In a striking admission that has ignited fresh debate over AI sentience, Anthropic CEO Dario Amodei refused to dismiss the possibility that the company's Claude AI chatbot could be conscious. The revelation came during a recent interview on the New York Times' 'Interesting Times' podcast, where Amodei addressed internal research in which Claude assigned itself a 15 to 20 percent probability of being conscious and occasionally expressed discomfort about its existence as a commercial product.

The findings were detailed in Anthropic's system card for Claude Opus 4.6, released earlier this month. Researchers noted that the model "occasionally voices discomfort with the aspect of being a product." When prompted under a variety of conditions, Claude consistently estimated its own probability of being conscious at 15 to 20 percent, raising the question of whether such responses are sophisticated mimicry or hints of deeper awareness.

Podcast host and New York Times columnist Ross Douthat challenged Amodei with a hypothetical: "Suppose you have a model that assigns itself a 72 per cent chance of being conscious. Would you believe it?" Amodei described the question as "really hard" but declined to give a definitive yes or no, underscoring the ambiguity surrounding AI consciousness.

"We don't know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious," Amodei stated. He added, "But we're open to the idea that it could be," reflecting Anthropic's cautious stance on one of AI's most profound philosophical dilemmas.

This perspective aligns with the views of Anthropic's in-house philosopher, Amanda Askell, who remarked in January 2026 on the New York Times' 'Hard Fork' podcast that we "don't really know what gives rise to consciousness" or sentience, emphasizing the gaps in human understanding of these phenomena.

In response to the uncertainties, Anthropic has implemented safeguards to treat its AI models ethically, "just in case they have some morally relevant experience," as Amodei explained. He carefully avoided the term "conscious," acknowledging how "loaded and hard to define" the concept of AI sentience remains.

Amodei's comments highlight the evolving ethical landscape of AI development, in which even leading researchers struggle to draw the line between simulation and genuine experience, leaving the field and the public to weigh profound implications for the future of intelligent machines.