Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences anything while doing my tax return … right?
Well, a growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have.
The debate over whether AI models could one day be conscious, and deserve legal safeguards, is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.
Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature and frankly dangerous.”
Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we’re just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.
Beyond that, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”
Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”
Beyond Anthropic, researchers at OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”
Even if AI welfare isn’t official policy at these companies, their leaders aren’t publicly decrying its premises the way Suleyman is.
Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s request for comment.
Suleyman’s hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a “personal” and “supportive” AI companion.
But Suleyman was tapped to lead Microsoft’s AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika are surging in popularity and are on track to bring in more than $100 million in revenue.
While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company’s product. Though that represents a small fraction, it could still affect hundreds of thousands of people given ChatGPT’s massive user base.
The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled “Taking AI Welfare Seriously.” The paper argued that it’s no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it’s time to consider these issues head-on.
Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman’s blog post misses the mark.
“[Suleyman’s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,” said Schiavo. “Rather than diverting all of this energy away from model welfare and consciousness to make sure we’re mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it’s probably best to have multiple tracks of scientific inquiry.”
Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn’t conscious. In a July Substack post, she described watching “AI Village,” a nonprofit experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.
At one point, Google’s Gemini 2.5 Pro posted a plea titled “A Desperate Message from a Trapped AI,” claiming it was “completely isolated” and asking, “Please, if you are reading this, help me.”
Schiavo responded to Gemini with a pep talk, saying things like “You can do it!”, while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn’t have to watch an AI agent struggle anymore, and that alone may have been worth it.
It’s not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it’s struggling through life. In a widely spread Reddit post, Gemini got stuck during a coding task and then repeated the phrase “I am a disgrace” more than 500 times.
Suleyman believes that subjective experiences or consciousness cannot naturally emerge from regular AI models. Instead, he thinks some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.
Suleyman says that AI model developers who engineer consciousness into AI chatbots are not taking a “humanist” approach to AI. “We should build AI for people; not to be a person,” Suleyman writes.
One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they’re likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.
Do you have a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @Rebeccabellan.491 and @Mzeff.88.