DeepMind's New AI Masters Games Without Even Being Taught the Rules
AnonTechie writes:
The folks at DeepMind are pushing their methods one step further toward the dream of a machine that learns on its own, the way a child does.
The London-based company, a subsidiary of Alphabet, is officially publishing the research today in Nature, although it tipped its hand back in November with a preprint on arXiv. Only now, though, are the implications becoming clear: DeepMind is already looking into real-world applications.
DeepMind won fame in 2016 for AlphaGo, a reinforcement-learning system that mastered the game of Go after training on millions of master-level games. In 2018 the company followed up with AlphaZero, which trained itself to master Go, chess, and shogi, all without recourse to master games or advice. Now comes MuZero, which doesn't even need to be shown the rules of the game.
The new system tries first one action, then another, learning what the rules allow while noticing the rewards on offer: in chess, by delivering checkmate; in Pac-Man, by swallowing a yellow dot. It then alters its methods until it hits on a way to win such rewards more readily; that is, it improves its play. Such self-directed learning is ideal for any AI that faces problems that can't be specified easily. In the messy real world, far from the abstract purity of games, such problems abound.
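The trial-and-error idea described above can be sketched in miniature. The following is an illustrative toy, not DeepMind's MuZero: a tabular Q-learning agent is dropped into a hypothetical corridor environment whose rules (which moves are legal, where the reward is) it never sees directly. It tries actions, observes the rewards, and gradually infers a winning policy. The `Corridor` environment, its action names, and all parameters are invented for this example.

```python
import random

class Corridor:
    """Hypothetical 5-cell corridor. The agent never sees these rules:
    'right' moves toward a reward at the end, 'left' moves away,
    and 'jump' is an illegal move that just costs a penalty."""
    ACTIONS = ["left", "right", "jump"]

    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        if action == "jump":  # illegal move: no effect, small penalty
            return self.pos, -1.0, False
        self.pos += 1 if action == "right" else -1
        self.pos = max(0, min(self.length - 1, self.pos))
        done = self.pos == self.length - 1
        # Big reward for reaching the goal, small cost per step otherwise.
        return self.pos, (10.0 if done else -0.1), done

def train(episodes=300, eps=0.1, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning: estimate action values purely from observed
    rewards, with no model of the environment's rules."""
    rng = random.Random(seed)
    env = Corridor()
    q = {(s, a): 0.0 for s in range(env.length) for a in Corridor.ACTIONS}
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 50:
            if rng.random() < eps:  # occasionally explore at random
                a = rng.choice(Corridor.ACTIONS)
            else:                   # otherwise exploit current estimates
                a = max(Corridor.ACTIONS, key=lambda x: q[(s, x)])
            s2, r, done = env.step(a)
            best_next = max(q[(s2, x)] for x in Corridor.ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s, steps = s2, steps + 1
    return q

q = train()
# The learned policy at each non-terminal state: the agent has inferred
# the unstated rule that moving right leads to the reward.
policy = {s: max(Corridor.ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)}
```

This toy uses a lookup table where MuZero uses learned neural models of dynamics and value, but the core loop is the same shape: act, observe reward, refine, repeat.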
So, how far do you think this approach can advance?
Read more of this story at SoylentNews.