Google artificial intelligence creates its own AI ‘child’
Machine-made programme is more accurate than human-made systems
Google’s AutoML artificial intelligence (AI) system has created its own “fully-functional AI child” that’s capable of outperforming its human-made equivalents, reports Alphr.
The computer-made system, known as NASNet, is designed to identify objects, such as people and vehicles, in photographs and videos, the search engine giant says.
Studies show that NASNet is able to identify objects in an image with 82.7% accuracy. Google says this is an improvement of 1.2 percentage points over AI programmes created by humans.
The web giant has made the system “open source”, which means developers from outside the company can either expand upon the programme or develop their own version.
Researchers at Google say they hope AI developers will be able to build on these models to address “multitudes of computer vision problems we have not yet imagined.”
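Because the models have been released openly, developers can already experiment with pretrained versions of the NASNet architecture. The sketch below is illustrative only: it assumes a standard TensorFlow/Keras installation, uses the publicly available ImageNet-trained NASNetMobile weights, and takes "photo.jpg" as a placeholder path for any local image.

```python
# Illustrative sketch: labelling objects in a photo with a pretrained NASNet model.
# Assumes TensorFlow 2.x; "photo.jpg" is a placeholder for a local image file.
import numpy as np
from tensorflow.keras.applications.nasnet import (
    NASNetMobile, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Load the smaller NASNet variant with ImageNet weights (expects 224x224 input).
model = NASNetMobile(weights="imagenet")

# Read the image and preprocess it into the shape the network expects.
img = image.load_img("photo.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Predict and print the three most likely object labels with their scores.
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.2%}")
```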
While the AI-made programme appears to be harmless in its current guise, Alphr says significant advances in its technology could have “dangerous implications.”
The website says AI systems could, for instance, develop their own “biases” and pass them on to other machines.
But the Daily Express says tech giants Facebook and Apple have joined the Partnership on AI to Benefit People and Society, a group that aims to implement strategies that “only allow AI to be developed if it will benefit humanity.”
The newspaper reports that Google’s engineering chief, Ray Kurzweil, also believes AI could cause problems for mankind in the future.
He says humanity will experience “difficult episodes” before AI can be used to benefit civilisation.