'We Will Win:' OpenAI's 'Dota 2' Team Is Crushing Humans Online, But Players Are Not Giving Up
OpenAI's team of Dota 2 bots is taking on a much larger opponent after beating the pro team OG, which won more than $11 million at Dota 2's high-profile tournament, The International 2018. From April 18 to 21, OpenAI Five will face the internet. OpenAI, a technology non-profit founded by Elon Musk and others, is allowing any Dota 2 player to attempt to beat OpenAI Five. And just hours after the test began, human players started winning.
We've seen AI-powered bots play, and beat, professional players before. Both OpenAI Five and DeepMind's AlphaStar have defeated professional video game players, but never in live-streamed matches; last week's match against OG was the first time one of these AI wins happened live. Now, we can watch OpenAI Five face a massive number of human teams in real time. At the time of writing, nine teams have beaten OpenAI Five, giving the AI roster a 1,923-9 win/loss record, which adds up to a 99.5 percent win rate for the bots. (One of those wins is currently under investigation; the leaderboard recorded it as coming from a single player in just seven minutes.) Of course, there are also 1,602 games in progress at the moment. With information gained from the sheer number of matches played, human players are learning how to exploit OpenAI Five's weaknesses.
The first team to do so was a professional roster from Thailand, Alpha Red, which beat the AI team in around 45 minutes, 34 kills to 24.
"Beating OpenAI Five is a testament to human tenacity and skill," OpenAI co-founder Greg Brockman told Motherboard. "The human teams have been working together to get those wins. The way people win is to take advantage of every single weakness in Five-some coming from the few parts of Five that are scripted rather than learned-gradually build up resources, and most importantly, never engage Five in a fair fight."
Beating OpenAI Five is no easy task in Dota 2, a multiplayer online battle arena game. Working together as different heroes, Dota 2 teams need to manage their resources and upgrades to destroy the enemy team's base while still defending their own. The full game has more than 100 heroes, but the OpenAI Five challenge reduces the pool to 17, which makes things a bit simpler. (OpenAI tried to up the hero pool to 25, but the system wasn't "learning fast enough" to reach a professional level before the OG match, so it was reduced to 17.) OpenAI Five is trained with "deep reinforcement learning," in which a neural network "learns" and adapts through trial-and-error training. The program doesn't have more information than a human does, Mark Riedl, an associate professor of AI and machine learning at the Georgia Tech College of Computing, told Motherboard in August. But it does interpret its information "perfectly and instantaneously," which is a skill that humans lack.
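For readers curious what "learning through training" looks like in code, the sketch below is a deliberately tiny, illustrative example of a policy-gradient loop, the general family of techniques behind deep reinforcement learning. It is not OpenAI's code: OpenAI Five is actually trained with a massively scaled-up version of Proximal Policy Optimization across huge amounts of compute, and every detail below (the toy two-action task, the small network, the 500 training rounds) is invented purely for illustration.

# Illustrative only: a minimal policy-gradient (REINFORCE) loop on a made-up
# two-action task. OpenAI Five's real training (massively parallel PPO over
# recurrent networks) is far more complex; nothing here comes from OpenAI's code.
import torch
import torch.nn as nn
from torch.distributions import Categorical

def step(action: int) -> float:
    """Toy 'environment': action 1 pays off 70% of the time, action 0 only 30%."""
    p_reward = 0.7 if action == 1 else 0.3
    return 1.0 if torch.rand(1).item() < p_reward else 0.0

policy = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=0.01)
state = torch.ones(1, 1)  # a single dummy state; real games have rich observations

for episode in range(500):
    dist = Categorical(logits=policy(state))
    action = dist.sample()              # try something
    reward = step(int(action.item()))   # see how it went
    # REINFORCE update: raise the log-probability of actions that earned reward.
    loss = -(dist.log_prob(action) * reward).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the policy should strongly prefer the higher-paying action.
print(torch.softmax(policy(state), dim=-1))

The point of the sketch is only the shape of the process: the program acts, observes a reward, and nudges its network toward whatever worked, repeated millions upon millions of times.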
So to say that the Dota 2 teams taking on OpenAI Five over the weekend are up against a challenge is an understatement. The program has practiced the equivalent of 45,000 years of Dota 2 over a 10-month span. Players are working together in Discord servers and on Reddit to analyze weaknesses and figure out how best to exploit the program. One particularly rousing piece of encouragement was posted by Swedish Dota 2 player Niklas "Wagamama" Högström: "The bots are locked," the player said, as transcribed on Reddit. "They are not learning, but we humans are. We will win."
Each of those winning games is providing players with a ton of new information to consider. From Wagamama's game, players realized that, at least in this particular team's situation, the bots "never try to deny towers" and are bad at a Dota 2 tactic called split-pushing, which is attacking an enemy's defenses from multiple lanes.
Human players are learning, but so are the humans who created OpenAI Five. OpenAI will continue its work on Dota 2, it said in a blog post. The company also plans to release "a more technical analysis" of the program once it's reviewed the OpenAI Five Arena matches.
"You can think of what we're doing as a massive-scale experiment to better understand Five," Brockman said. "Can top teams learn to beat it consistently? So far, no one's beaten it twice in a row. Can people find exploits that allow even low-ranked teams to beat it? We don't know what to expect-no one's ever run an experiment like this."