A dish of neurons may have taught itself to play Pong (badly)
In culture, nerve cells spontaneously form the structures needed to communicate with each other. (credit: JUAN GAERTNER / Getty Images)
One of the more exciting developments in AI has been the creation of algorithms that can teach themselves the rules of a system. Early game-playing algorithms had to be given the basics of a game. But newer versions don't need that; they simply need a system that keeps track of some reward, like a score, and they can figure out which actions maximize that reward without needing a formal description of the game's rules.
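The core idea can be sketched with a toy reward-only learner. This is a minimal epsilon-greedy value learner on a hypothetical three-action task, not the actual algorithm from any game-playing system mentioned here: the agent never sees the task's rules, only a reward signal, and still converges on the best action.

```python
import random

def train_bandit(true_rewards, episodes=2000, epsilon=0.1, alpha=0.1, seed=0):
    """Learn action values purely from observed reward, with no model of the task.

    true_rewards is a hypothetical hidden reward per action; the learner
    only ever sees noisy samples of it, never the list itself.
    """
    rng = random.Random(seed)
    q = [0.0] * len(true_rewards)  # estimated value of each action
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=lambda i: q[i])
        # The environment's "rules" stay hidden inside this reward signal.
        r = true_rewards[a] + rng.gauss(0, 0.1)
        # Nudge the estimate toward the observed reward.
        q[a] += alpha * (r - q[a])
    return q

if __name__ == "__main__":
    q = train_bandit([0.2, 1.0, 0.5])
    best = max(range(3), key=lambda i: q[i])
    print(best)  # the action the learner found most rewarding
```

The learner is told nothing about why action 1 pays more; it simply tracks which choices tend to raise the score, which is the same score-maximizing loop the article describes.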
A paper published in the journal Neuron takes this a step further by using actual neurons grown in a dish full of electrodes. This introduced an additional complication: there was no way to know what the neurons would actually find rewarding. The fact that the system seems to have worked may tell us something about how neurons can self-organize their responses to the outside world.
Say hello to DishBrain

The researchers behind the new work, who were primarily based in Melbourne, Australia, call their system DishBrain. And it's based on, yes, a dish with a set of electrodes on its floor. When neurons are grown in the dish, these electrodes can do two things: sense the activity of the neurons above them or stimulate those neurons. The electrodes are large relative to the size of neurons, so both sensing and stimulation (which can be thought of as similar to reading and writing information) involve a small population of neurons rather than a single one.