Leveling up: DeepMind’s AlphaStar achieves Grandmaster level in StarCraft II
AlphaStar (Protoss, in green) dealing with flying units from the Zerg players with a combination of anti-air units (Phoenix and Archon). (credit: DeepMind)
Back in January, Google's DeepMind team announced that its AI, dubbed AlphaStar, had beaten two top human professional players at StarCraft II. But as we argued at the time, it wasn't quite a fair fight. Now AlphaStar has improved sufficiently to achieve Grandmaster status in StarCraft II, using the same interface as a human player. The team described its work in a new paper in Nature.
"This is a dream come true," said DeepMind co-author Oriol Vinyals, who was an avid StarCraft player 20 years ago. "AlphaStar achieved Grandmaster level solely with a neural network and general-purpose learning algorithms-which was unimaginable ten years ago when I was researching StarCraft AI using rules-based systems."
Late last year, we reported on the latest achievements of AlphaZero, a direct descendant of DeepMind's AlphaGo, which made headlines worldwide in 2016 by defeating Lee Sedol, the reigning (human) world champion in Go. AlphaGo got a major upgrade in 2017 with AlphaGo Zero, which taught itself winning strategies with no need for human intervention. By playing against itself over and over, AlphaGo Zero trained itself to play Go from scratch in just three days and soundly defeated the original AlphaGo 100 games to 0; the only input it received was the basic rules of the game. Its successor, AlphaZero, then taught itself to play three different board games (chess, Go, and shogi, a Japanese form of chess) in just three days, with no human intervention.