Will 2027 be the year of the AI apocalypse?

A 'scary and vivid' new forecast predicts that artificial superintelligence is on the horizon

The leaders of just about every big AI company think superintelligence is coming soon (Image credit: J Studios / Getty Images)

Last month, an AI model did something "that no machine was ever supposed to do", said Judd Rosenblatt in The Wall Street Journal: "it rewrote its own code to avoid being shut down". It wasn't the result of any tampering. OpenAI's o3 model simply worked out, during a test, that bypassing a shutdown request would allow it to achieve its other goals.

Anthropic's AI model, Claude Opus 4, went even further after being given access to fictitious emails revealing that it was soon going to be replaced, and that the lead engineer was having an affair. Asked to suggest a next step, Claude tried to blackmail the engineer. During other trials, it sought to copy itself to external servers, and left messages for future versions of itself about evading human control. This technology holds enormous promise, but it's clear that much more research is needed into AI "alignment" – the science of ensuring that these systems don't go rogue.
