Another DeepMind victory: after chess and Go, artificial intelligence conquers StarCraft

In November 2017, a little over a year ago, we wrote that AI was not yet able to beat professional players at StarCraft. In less than a year, that barrier has fallen too. Last month the London-based team at DeepMind, the British artificial-intelligence research division, quietly laid a new cornerstone in the contest between people and computers. On Thursday, it revealed the achievement in a three-hour YouTube stream in which humans and bots fought to the death.


DeepMind beats humans at StarCraft

The DeepMind broadcast showed that its artificial-intelligence bot, AlphaStar, can beat a professional player at the challenging real-time strategy (RTS) game StarCraft II. Humanity's champion, 25-year-old Grzegorz Komincz of Poland, lost 5:0. The machine-learning software appears to have discovered strategies unknown to the professionals who compete for the millions of dollars in prize money awarded each year in one of the world's most lucrative e-sports.


"It was not like any one of StarCraft, in which I played," said Komints, known under the nickname of professional MANA.

DeepMind's feat is the latest in a long chain of games in which computers have overthrown the world's best human players. Checkers fell in 1994, chess in 1997, and AlphaGo won its first match against a Go champion in 2016. A bot for StarCraft, one of the hardest games yet tackled by artificial intelligence, had long been awaited. AlphaStar arrives roughly six years into the modern machine-learning boom. And while AlphaGo's stunning 2016 victory came as a shock, with experts having predicted that such a moment was at least a decade away, AlphaStar's win feels more or less on schedule. By now it is clear that, given enough data and computing power, machine learning can master difficult but narrowly defined problems.

Mark Riedl, a professor at the Georgia Institute of Technology, found Thursday's news exciting but not shocking. "We had already reached the point where it was only a matter of time. In a sense, beating humans at games has become boring."

Video games like StarCraft are mathematically more complex than chess or Go. The number of valid positions on a Go board is a 1 followed by 170 zeros; the StarCraft equivalent has been estimated as a 1 followed by at least 270 zeros. Building and commanding military units in StarCraft also requires players to select and perform many more actions, and to make decisions without being able to see every move their opponent makes.

DeepMind overcame these steep barriers with the help of the powerful TPU chips Google invented to accelerate machine learning. The company adapted algorithms originally developed for processing text to the task of working out which battlefield actions lead to victory. AlphaStar was first trained on recordings of half a million games between humans, then played against continually evolving clones of itself in a virtual league, a kind of digital evolution. The best bots to emerge from this league had accumulated the equivalent of about 200 years of playing time. The AlphaStar that beat MaNa is not all-powerful: for now, the bot can play only one of the three races available in StarCraft. Beyond its inhumanly long playing experience, AlphaStar also perceives the game differently. It sees everything happening on the map simultaneously, while MaNa had to pan around the map to see what was going on. AlphaStar can also control and target units more precisely than a human with a mouse, although its reaction time is slower than a professional gamer's.
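The league training described above can be sketched, in heavily simplified form, as a loop of self-play and selection. This is an illustrative toy only: real AlphaStar agents are deep neural networks, and make_agent, play_match, and league_step below are hypothetical stand-ins, with a single "strength" number standing in for learned skill.

```python
import random

def make_agent(strength):
    """A toy 'agent' is just a skill rating (real agents are neural nets)."""
    return {"strength": strength}

def play_match(a, b):
    """Return True if agent a wins; win probability grows with its skill edge."""
    edge = a["strength"] / (a["strength"] + b["strength"])
    return random.random() < edge

def league_step(league, mutation=0.1):
    """One league round: everyone plays everyone, the top half survives,
    and each survivor spawns a slightly mutated clone of itself."""
    wins = [0] * len(league)
    for i in range(len(league)):
        for j in range(len(league)):
            if i != j and play_match(league[i], league[j]):
                wins[i] += 1
    ranked = sorted(range(len(league)), key=lambda i: wins[i], reverse=True)
    survivors = [league[i] for i in ranked[: len(league) // 2]]
    children = [make_agent(p["strength"] + random.uniform(0, mutation))
                for p in survivors]
    return survivors + children

random.seed(0)
league = [make_agent(random.uniform(0.5, 1.5)) for _ in range(8)]
init_mean = sum(a["strength"] for a in league) / len(league)
for _ in range(50):
    league = league_step(league)
final_mean = sum(a["strength"] for a in league) / len(league)
```

Because winners of each round are both kept and copied with small positive mutations, average skill in the league drifts upward over time, which is the "digital evolution" effect the article describes.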

Despite these caveats, Riedl and other experts welcomed DeepMind's work. "It was very impressive," said Jie Tang, a researcher at the independent AI research institute OpenAI who works on bots that play Dota 2, the most lucrative e-sports game in the world. Such video-game tricks may have useful side effects: the algorithms and code OpenAI used for Dota last year have been adapted, with varying degrees of success, to make robot hands more nimble.

Nevertheless, AlphaStar illustrates a limitation of today's narrowly specialized machine-learning systems, says Julian Togelius, a professor at New York University and author of a recent book on games and artificial intelligence. Unlike its human opponent, the new DeepMind champion cannot play at full strength on different maps, or as the game's other alien races, without lengthy additional training. Nor can it play checkers, chess, or earlier versions of StarCraft. This inability to handle even small surprises is a problem for many anticipated applications of AI, such as autonomous vehicles, and for the adaptable systems researchers call artificial general intelligence (AGI). A more meaningful battle between man and machine might therefore be a kind of decathlon, starting with board games and video games and ending with Dungeons and Dragons.

The limitations of highly specialized artificial intelligence seemed apparent when MaNa played a demonstration game against a version of AlphaStar that was restricted, like a human, to viewing the map one screen at a time. DeepMind's data showed this version to be almost as strong as the one that beat MaNa in five games.

The new bot quickly assembled an army powerful enough to crush its human opponent, but MaNa used clever maneuvers, and the lessons of his earlier defeats, to hold the AI's forces back. The delay gave him time to gather his own troops and win.

For more interesting news, visit us on Zen.