Anthropic AI Safety Lead Mrinank Sharma Resigns Warning World ‘in Peril’
Anthropic AI safety lead Mrinank Sharma resigns in a public letter, warns world is in peril and raises ethical concerns.

Mrinank Sharma, the head of the Safeguards Research Team at Anthropic, has resigned from his position in a highly public and philosophical departure. His letter, shared on social platform X on February 9, 2026, went viral and sparked widespread discussion about ethics, AI risk and values in tech.
Sharma’s message is striking. He did not simply announce a job change. He warned that “the world is in peril,” and said humanity’s wisdom must grow as fast as our technological power. His letter has attracted attention from AI researchers, ethicists and the global tech community.
Who Is Mrinank Sharma and What Was His Role?
Sharma is an AI safety expert with an academic background that includes a Doctorate in Machine Learning from the University of Oxford and a Master of Engineering from the University of Cambridge.
He joined Anthropic, a leading AI research firm known for its Claude models, in 2023. There, he led the Safeguards Research Team, a unit created to address safety and alignment issues in advanced AI systems. His work included:
- Researching defenses against AI-assisted bioterrorism
- Investigating AI sycophancy, where models excessively flatter or agree with users
- Studying how AI systems might distort human thinking
- Building safeguards to prevent unintended misuse of AI technologies
His team focused on ensuring that AI systems behave responsibly and do not pose undue risks.
Why He Decided to Leave
In his resignation letter, Sharma said the timing felt right for him to step away. He expressed deep concern about global risk factors that go beyond AI alone. His warning included a broad sense of urgency about interconnected crises such as:
- AI risks and misuse
- Biological threats accelerated by technology
- A gap between technological power and collective wisdom
Sharma wrote that humanity faces a moment where our abilities to influence the world are outpacing our moral and ethical growth. He said this imbalance could have serious consequences if left unchecked.
He also pointed to a struggle within Anthropic itself. Despite its public commitment to safety, Sharma said internal pressures sometimes made it hard to “truly let our values govern our actions.” His letter suggested that maintaining ethical integrity in a fast-paced AI environment is much harder in practice than in principle.
His Philosophical and Personal Turning Point
Rather than moving to another tech role, Sharma said he wants to explore meaningful ways of engaging with the world, including writing and possibly pursuing a degree in poetry.
In his note, he spoke about blending “poetic truth with scientific truth.” He believes that understanding humanity, values and meaning requires a broader perspective than technical analysis alone. He wants to focus on what he called “courageous speech” and deeper reflection about our collective future.
Sharma also shared references to poets like Rainer Maria Rilke and William Stafford, hinting that literature and philosophy may help guide how society confronts rapid change.
What This Means for Anthropic and the AI Industry
Sharma’s resignation comes as Anthropic continues to grow. The company recently launched Claude Opus 4.6, a new AI model designed to increase productivity in coding and workplace tasks, and is eyeing a multibillion-dollar valuation.
His exit also follows other departures, including researchers who left to start new ventures or join other AI firms. Some industry observers see this as part of a broader tension between AI safety values and commercial pressure in the tech sector.
Sharma’s public statement raises questions about how AI companies balance innovation with ethics. It highlights a growing debate over whether technical fixes alone can keep AI aligned with human values and societal wellbeing.
Broader Industry Reaction
Responses to Sharma’s resignation have varied. Some see his warning as a thoughtful call for deeper reflection on how technology shapes the world. Others view his decision as a reminder of the challenges that AI companies face in balancing rapid product development with ethical safeguards.
Critics point out that warning about global risk without naming specific solutions can come off as vague. Supporters, however, see value in focusing public attention on how ethical priorities are managed in places where powerful technologies are built and deployed.