Google artificial intelligence creates its own AI ‘child’
Machine-made programme is more accurate than human-made systems
Google’s AutoML artificial intelligence (AI) system has created its own “fully-functional AI child” that’s capable of outperforming its human-made equivalents, reports Alphr.
The computer-made system, known as NASNet, is designed to identify objects, such as people and vehicles, in photographs and videos, the search engine giant says.
In Google's tests, NASNet identified objects in images with 82.7% accuracy, which the company says is an improvement of 1.2 percentage points over the best AI programmes created by humans.
The web giant has made the system “open source”, which means developers from outside the company can either expand upon the programme or develop their own version.
Researchers at Google say they hope AI developers will be able to build on these models to address “multitudes of computer vision problems we have not yet imagined.”
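Because the architecture has been released openly, a developer could experiment with it through a standard deep-learning framework. The sketch below is one illustrative route, assuming the pretrained NASNetLarge wrapper in TensorFlow's Keras applications module (a community-maintained packaging of the model, not necessarily Google's original release) and a hypothetical image file, photo.jpg.

```python
# Minimal sketch: classify objects in a photo with a pretrained NASNet model,
# assuming TensorFlow's tf.keras.applications.NASNetLarge and ImageNet weights.
import numpy as np
import tensorflow as tf

# Load NASNetLarge with weights pretrained on ImageNet.
model = tf.keras.applications.NASNetLarge(weights="imagenet")

# Load an example image (path is hypothetical) at the 331x331 input size
# that NASNetLarge expects, then apply the model's preprocessing.
img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(331, 331))
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.nasnet.preprocess_input(x[np.newaxis, ...])

# Predict and print the top three object labels with their scores.
preds = model.predict(x)
for _, label, score in tf.keras.applications.nasnet.decode_predictions(preds, top=3)[0]:
    print(label, round(float(score), 3))
```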
While the AI-made programme appears to be harmless in its current guise, Alphr says significant advances in its technology could have “dangerous implications.”
The website says AI systems could, for instance, develop their own "biases" and pass them on to other machines.
But the Daily Express says tech giants Facebook and Apple have joined the Partnership on AI to Benefit People and Society, a group that aims to implement strategies that “only allow AI to be developed if it will benefit humanity.”
The newspaper reports that Google’s engineering chief, Ray Kurzweil, also believes AI could cause problems for mankind in the future.
He says humanity will experience “difficult episodes” before AI can be used to benefit civilisation.