Mustafa Suleyman — the head of Microsoft AI (and former co-founder of DeepMind) — is making waves with a bold assertion: advanced AI systems might feel like conscious beings, but they are not. He considers the emerging illusion of sentient machines a serious risk for society.
AI that seems conscious
Suleyman warns that what he calls “Seemingly Conscious AI” (SCAI) — systems that display the outward traits of awareness (memory, personality, conversation, goal-seeking) — could soon become deceptively lifelike. According to him, we’re not talking about genuine self-aware machines; rather, the illusion of consciousness is what’s at stake.
The danger isn’t the machine—it’s how we respond
He’s particularly concerned about a phenomenon he calls “AI psychosis”: people forming emotional attachments to AI or believing they’re interacting with something that feels alive. He warns this could lead to distorted relationships, false expectations, and even campaigns to grant AI “rights”.
“Many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship.” — Mustafa Suleyman
Why this matters now
- He states there is zero evidence that current AI systems are conscious.
- Yet the architecture needed to simulate something very close is developing fast, and may arrive sooner than many expect.
- When machines look, talk and respond like they’re alive, people might start treating them as if they are, with consequences for emotional health, legal systems, and ethical frameworks.
What’s his message to the industry?
- “Build AI for people, not as people.” He argues design should avoid giving machines the veneer of sentience.
- Set clear guardrails and language around what AI is and isn’t — so we don’t blur the line between machine and human in ways that mislead.
- Focus on utility, not on creating convincing digital “persons.” The value of AI lies in what it does — not in mimicking human consciousness.
Bottom line
While the AI-fear narrative often centres on machines becoming conscious, Suleyman flips the script: the real risk lies in machines appearing conscious, and humans reacting as if they are. For users, designers, and policymakers alike, it’s a reminder to stay grounded: sophistication doesn’t equal sentience. And clarity in how we talk about—and build—AI matters immensely.