A computer program called AlphaGo Zero taught itself how to play a strategic game far more complex than chess, researchers at Google's DeepMind reported in a recent study, a feat both impressive and a little frightening.
Go is an East Asian strategy game, played on a 19-by-19 grid that allows an astronomically large number of possible configurations, NPR explains. Researchers at Google's DeepMind lab developed a program last year using databases of moves from games played by human experts, and it went on to beat the best Go player on the planet, world champion Lee Sedol.
This time around, researchers at DeepMind tried something new. Instead of training the program, then called AlphaGo, on records of human play, they let the machine discover effective strategies on its own, NPR says. The scientists found that relying on human knowledge could actually impose limits on their program, and that starting with a blank slate was much more effective.
The resulting updated program — dubbed AlphaGo Zero — spent three days playing 4.9 million games of Go against itself. Then, AlphaGo Zero beat the original version, which had been trained on human games, 100 games to 0. AlphaGo Zero also discovered strategies that human players have not yet figured out.
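AlphaGo Zero's actual training combines a deep neural network with game-tree search, which is far beyond a short snippet. But the core blank-slate idea — a program that starts knowing only the rules and improves solely by playing itself — can be illustrated on a much simpler game. The sketch below (all names and parameters are illustrative, not DeepMind's code) uses self-play on a toy stone-taking game: players alternate removing 1-3 stones, and whoever takes the last stone wins. One shared value table is updated with a negamax-style rule: a move's value is the negation of the best value available to the opponent who moves next.

```python
import random

def train_self_play(pile=10, episodes=20000, alpha=0.5, eps=0.3, seed=0):
    """Learn a toy game purely from self-play, with no human games to
    imitate. One Q-table is shared by both sides; each move is scored
    as the negation of the opponent's best follow-up (negamax update)."""
    rng = random.Random(seed)
    Q = {}  # (stones_left, action) -> value for the player about to move

    def q(s, a):
        return Q.get((s, a), 0.0)

    for _ in range(episodes):
        s = pile
        while s > 0:
            actions = [a for a in (1, 2, 3) if a <= s]
            # epsilon-greedy: mostly play the best known move, sometimes explore
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q(s, x))
            nxt = s - a
            if nxt == 0:
                target = 1.0  # taking the last stone wins
            else:
                # the opponent moves next; their best outcome is our worst
                target = -max(q(nxt, b) for b in (1, 2, 3) if b <= nxt)
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = nxt
    return Q

def best_move(Q, stones):
    """Greedy move from the learned table."""
    return max((a for a in (1, 2, 3) if a <= stones),
               key=lambda a: Q.get((stones, a), 0.0))
```

After training, the table recovers the game's known optimal strategy (always leave the opponent a multiple of four stones) without ever being shown an example game — a miniature of the effect the DeepMind team reports at vastly larger scale.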
AlphaGo Zero could have far-reaching implications, though it remains to be seen whether this "blank slate" approach can be applied to other complex problems. Read the full published study at Nature, or read more about the story of AlphaGo Zero at NPR.