Artificial Intelligence defeats humans in first-person games

Pairs of human players captured an average of 16 fewer flags than pairs of artificial-intelligence bots. [Image: DeepMind]

Researchers at Google's DeepMind have created virtual players, or bots, that learned on their own - without any prior instruction - to play a multiplayer 3D first-person game.

These software agents achieved human-level ability, not only playing on their own but also cooperating with teammates toward a common goal.

This is a significant advance over the company's previous achievements, which involved defeating human players at chess and Go.

The artificial intelligence agents, trained with a machine-learning technique known as "reinforcement learning," demonstrate an uncommon ability to develop and deploy high-level strategies, learned entirely on their own, to compete and cooperate within the game environment.

Reinforcement learning

Reinforcement learning, a method used to train artificially intelligent agents, had already shown its potential by producing virtual players capable of mastering increasingly complex environments, such as chess and Go.

However, the ability to handle multiple players simultaneously - particularly in games that involve teamwork and interaction among several independent players - had never been demonstrated, precisely because it is a problem of much higher complexity.
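The core idea of reinforcement learning - an agent improving its behavior purely through trial, error, and a reward signal - can be illustrated with a minimal, self-contained sketch. The following toy example (tabular Q-learning on a tiny corridor world; none of these names or numbers come from the DeepMind system) shows an agent learning, from rewards alone, that moving right reaches the goal:

```python
import random

# Toy reinforcement learning (tabular Q-learning): a 1-D corridor of 5 cells.
# The agent starts at cell 0 and receives a reward of 1 for reaching cell 4.
# All constants here are illustrative, not from the DeepMind paper.

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current estimates, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Extract the learned policy: every non-goal state should prefer moving right (+1)
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The same learning loop - act, observe a reward, update value estimates - scales up, with deep neural networks replacing the Q-table, to environments as complex as the game described here.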

Max Jaderberg and his colleagues demonstrated the potential of their artificial intelligence bots in first-person matches of Capture the Flag, a team-based mode of the game Quake III Arena.

In contrast to previous demonstrations, in which artificial intelligence agents received prior "knowledge" about the game environment or the status of other players, this new approach ensured that each software agent learned independently from its own experience, using only what the program itself could "see": the pixels on the screen and the score of the game.

A software system of this kind embedded in a robot would be fed information in the same way, since cameras likewise provide nothing but pixels.
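To make the "pixels and score only" interface concrete, here is a small sketch (all names and the stub policy are hypothetical, not DeepMind's code) of an agent that is handed nothing but a frame of pixels and the current score:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch: the agent's entire view of the world is what a human
# player would see - raw screen pixels plus the current score. It has no
# access to the game's internal state (positions of other players, etc.).

@dataclass
class Observation:
    pixels: List[List[int]]  # H x W grayscale frame, values 0-255
    score: int               # current game score

class PixelAgent:
    """Chooses actions from pixels and score alone (here: a stub policy)."""
    ACTIONS = ("forward", "back", "left", "right", "fire")

    def act(self, obs: Observation) -> str:
        # A real agent would feed obs.pixels through a neural network;
        # this stub just maps total frame brightness to an action index.
        brightness = sum(map(sum, obs.pixels))
        return self.ACTIONS[brightness % len(self.ACTIONS)]

frame = [[0, 255], [128, 64]]        # a tiny fake 2x2 frame
agent = PixelAgent()
print(agent.act(Observation(pixels=frame, score=0)))
```

The point of the narrow interface is exactly the one made above: swap the fake frame for a robot's camera feed and nothing about the agent's contract changes.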

The bots learned only by analyzing the pixels on the screen and the score of the game. [Image: Jaderberg et al. - 10.1126/science.aau6249]

Defeating or cooperating with humans

Pitted against one another, a population of artificial intelligence agents learned to play by engaging in thousands of matches in randomly generated environments. According to the researchers, over time the agents independently developed surprisingly high-level strategies, similar to those used by skilled human players of the game.
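The population-based self-play described above can be caricatured in a few lines. In this deliberately simplified sketch (illustrative only, not the paper's algorithm), each agent is reduced to a single "skill" number, random pairs play noisy matches, and the weakest agent periodically copies and perturbs the strongest - the "exploit and explore" step by which stronger play spreads through a population:

```python
import random

random.seed(1)

# Population-based self-play, caricatured: agents are scalar "skills",
# a match is won by the higher skill plus noise, and after each round of
# matches the weakest agent copies the strongest and mutates slightly.

POP_SIZE = 8
population = [random.random() for _ in range(POP_SIZE)]  # initial skills
initial_mean = sum(population) / POP_SIZE

for generation in range(50):
    wins = [0] * POP_SIZE
    for _ in range(100):  # self-play: sample random pairings
        i, j = random.sample(range(POP_SIZE), 2)
        noise = random.gauss(0, 0.1)
        winner = i if population[i] + noise > population[j] else j
        wins[winner] += 1
    # exploit/explore: the lowest-ranked agent copies the top agent, then mutates
    ranked = sorted(range(POP_SIZE), key=lambda k: wins[k])
    worst, best = ranked[0], ranked[-1]
    population[worst] = population[best] + random.gauss(0, 0.05)

final_mean = sum(population) / POP_SIZE
print(final_mean > initial_mean)
```

In the real system each "skill" is a full neural network playing thousands of matches, but the selection dynamic - successful strategies being copied, varied, and re-tested against the rest of the population - is the same.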

Moreover, in matches against human players the agents outperformed their human adversaries, even when the agents' reaction times were artificially slowed to human levels.

Beyond that, the agents formed teams both with other agents and with human players, cooperating with either toward the goal of winning the match.

Programs capable of learning on their own raise a host of concerns. [Image: Jaderberg et al. - 10.1126/science.aau6249]

Technology for good and evil

The feat is remarkable and worth celebrating, since one can imagine innumerable ways of using this technology for the good of mankind.

However, there have also been warnings about the need to set boundaries for artificial intelligence, to at least try to steer the technology toward beneficial uses.

As these agents begin to appear in competitive situations and, potentially, in in-game violence, it may be time to start considering technological advances in this area from a broader perspective.

Among the fears are the creation of killer robots and the loss of human control over artificial intelligence, since computer programs will soon be programming themselves.



Bibliography:

Human-level performance in 3D multiplayer games with population-based reinforcement learning
Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castañeda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, Thore Graepel
Science, Vol. 364, Issue 6443, p. 859
DOI: 10.1126/science.aau6249
