Microsoft AI chief Mustafa Suleyman is against the idea of ‘conscious AI’; here’s why


Microsoft’s AI chief Mustafa Suleyman has a warning for AI developers and researchers who are working on projects aimed at building conscious AI.

“I don’t think that is work people should be doing,” Suleyman told CNBC in an interview last week. “If you ask the wrong question, you end up with the wrong answer. I think it’s totally the wrong question.”

Suleyman believes that while artificial intelligence may achieve some form of superintelligence, it is nearly impossible for it to develop the human emotional experience required for true consciousness. At the end of the day, any “emotional” experience that AI appears to have is merely a simulation, he says.


“Our physical experience of pain makes us feel sad and terrible, but AI doesn’t feel sadness when it experiences ‘pain,’” Suleyman explained. “It’s simply generating the perception, the seeming narrative of experience, of self, and of consciousness, but that’s not what it’s actually experiencing.”

“It would be absurd to pursue research that investigates that question, because they’re not conscious, and they can’t be,” Suleyman added.

Can AI be conscious?

Scientists, philosophers, and even the general public are divided on whether AI can be conscious. Some hold that consciousness is an inherently biological phenomenon, specific to brains. Others argue that consciousness can arise from computation alone, regardless of whether the system performing those computations is made of neurons, silicon, or any other physical substrate, a view known as computational functionalism.

In 2022, Google suspended software engineer Blake Lemoine after he claimed that AI chatbots could feel emotions and potentially suffer.


In November 2024, Kyle Fish, an AI welfare officer at Anthropic, co-authored a report suggesting that AI consciousness could be a realistic possibility in the near future. He also told The New York Times that he believed there was a 15 percent chance that chatbots are already conscious.

Suleyman, who also co-founded Google DeepMind, has repeatedly warned against the notion of “conscious AI.” He is concerned that a widespread belief in AI consciousness could create new ethical dilemmas. If people begin to treat AI as a friend, partner, or confidant, some may argue that AI models deserve rights of their own.

“The arrival of seemingly conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions,” Suleyman wrote in a blog post.

He argues that AI cannot truly be conscious, but that a convincing illusion of consciousness could draw users into interactions they experience as “rich in feeling and experience,” fueling what is sometimes called ‘AI psychosis’ in cultural discussions, where people come to hold distorted beliefs about the chatbots they talk to.


The truth is that no one fully understands what consciousness is, let alone how to measure it. The more immediate issue may be our growing dependence on AI systems like ChatGPT, and that dependence will continue to shape the debate over machine consciousness as the technology evolves.




