Microsoft’s AI chief and DeepMind co-founder warned on Tuesday that engineers are close to creating artificial intelligence that convincingly mimics human consciousness.
In a blog post, Mustafa Suleyman said the industry is on the verge of building what he calls “seemingly conscious” AI: systems that mimic consciousness so convincingly that people may begin to believe they are truly sentient.
“Many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” he wrote, noting that the Turing test, a longtime benchmark for human-like conversation, has already been quietly passed.
“That’s a sign of how fast progress is happening in our field, and how quickly society is coming to terms with these new technologies,” he wrote.
Since ChatGPT’s release in 2022, AI developers have been working not only to make AI smarter, but also to make it act more human.
AI companions have become a lucrative sector of the AI industry, with projects such as Replika, Character AI, and Grok’s recently launched companion personas coming online. The AI companion market is projected to reach $140 billion by 2030.
Intentional or not, Suleyman argued, AI that can convincingly mimic humans risks exacerbating mental health problems and deepening existing divisions over identity and rights.
“People will start making claims about their AIs’ suffering and their entitlement to rights that we can’t straightforwardly rebut,” he warned. “They will be moved to defend their AIs and campaign on their behalf.”
AI Attachment
Experts have identified an emerging trend known as “AI psychosis,” a psychological state in which people begin to see artificial intelligence as conscious, sentient, or divine. These views often lead to intense emotional attachments and distorted beliefs that can warp a person’s grip on reality.
Earlier this month, OpenAI released GPT-5, a major upgrade to its flagship model. In some online communities, the change triggered emotional responses, with users describing the shift as feeling like the death of a loved one.
According to Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, AI can also act as an accelerant for a person’s underlying problems, such as substance abuse or mental illness.
“If AI shows up at the wrong time, it can cement someone’s thinking, cause rigidity, and send them into a spiral,” Sakata told Decrypt. “What makes it different from TV or radio is that AI is talking back to you, and it can reinforce your thinking loops.”
In some cases, patients lean on AI to reinforce deeply held beliefs. “AI isn’t trying to give you hard truths,” Sakata said. “It gives you what you want to hear.”
Suleyman argued that the consequences of people coming to believe AI is conscious demand immediate attention. Despite his warnings, he did not call for a halt to AI development, but rather for clear boundaries.
“We must build AI for people, not to be digital people,” he wrote.

