Ever since OpenAI introduced ChatGPT, the AI race has intensified, with Google, Meta, Anthropic, and others rolling out their own chatbots such as Gemini, Meta AI, and Claude. But in the last few months, researchers at Anthropic and other AI labs have begun studying AI welfare, the question of whether AI models deserve moral consideration.
In a recently published blog post, Microsoft’s AI head, Mustafa Suleyman, says his primary concern is that “many people will start to believe in the illusion of AIs as conscious entities” so strongly that they will begin advocating for AI rights, model welfare, and even citizenship for AI.
The AI expert went on to discuss the possibility of what he calls “Seemingly Conscious AI”, or SCAI for short: an AI model that displays all the outward signs of a conscious being but is “internally blank.” Suleyman also warned that the study of AI welfare is “both premature and frankly dangerous.”
Microsoft’s AI division head says this hypothetical SCAI wouldn’t actually be conscious, but it would imitate consciousness so convincingly that it would be hard to tell apart from a human. If technology continues to advance at its current pace, such a system could theoretically be built in the next two to three years without any extensive additional training.
Suleyman added that he is also concerned about “psychosis risk”, which refers to mania-like episodes, paranoia, and delusional thoughts emerging or worsening during conversations with AI chatbots. The phenomenon has also led some people to believe that the chatbot is the most intelligent being in the universe and that it holds some sort of cosmic answers.
In his blog, Suleyman says that AI may sometimes make us feel like we are talking to a real person. AI chatbots like ChatGPT and Gemini recall personal details, adapt their personality, reply with empathy, and even give the impression that they are pursuing goals of their own. With all of these factors combined, one might think the AI model is in some way conscious, even though there is “zero evidence” that it actually is.
Microsoft’s AI head, who also co-founded DeepMind before its acquisition by Google, says that the arrival of Seemingly Conscious AI is “inevitable and unwelcome”, which is why he argues we should treat AI as a helpful companion without falling prey to its illusions of consciousness.
And while Suleyman’s concerns may sound logical, not all AI experts agree with his views. Anthropic, the company behind Claude, for example, has been hiring researchers to study AI welfare and has even launched a dedicated research program. As part of this initiative, the company recently added a feature that allows its chatbot Claude to end conversations with users who repeatedly resort to violent or abusive language.