
Optical Algorithm Simplifies Analog AI Training

by Mordechai Rorvig, IEEE Spectrum

Researchers have developed a range of analog and other unconventional machine learning systems in the expectation that they will prove vastly more energy efficient than today's computers. But training these AIs to do their tasks has been a big stumbling block. Researchers at NTT Device Technology Labs and the University of Tokyo now say they've come up with a training algorithm (announced by NTT last month) that goes a long way toward letting these systems meet their promise.

Their results, demonstrated on an optical analog computer, represent progress toward the efficiency gains that researchers have long sought from "unconventional" computer architectures.

Modern AI programs use a biologically inspired architecture called an artificial neural network to execute tasks like image recognition or text generation. The strengths of the connections between artificial neurons, which determine the outputs of the computation, must be adjusted, or trained, using standard algorithms. The most prominent of these algorithms is backpropagation, which updates the connection strengths to reduce the network's errors as it processes trial data. Because adjustments to some parameters depend on adjustments to others, the computer must actively pass and route information during training.

As Spectrum has elsewhere explained, "Error backpropagation is like running inference in reverse, moving from the last layer of the network back to the first layer; weight update then combines information from the original forward inference run with these backpropagated errors to adjust the network weights in a way that makes the model more accurate."
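To make the idea concrete, here is a minimal sketch of backpropagation for a toy two-layer network, written in NumPy. The network sizes, the tanh activation, and the variable names are illustrative, not taken from the study. Note the line computing e_hidden: the output error is routed backward through the transpose of the output weights, exactly the kind of information passing that analog hardware struggles to perform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 4 inputs -> 8 hidden units -> 2 outputs.
W1 = rng.normal(size=(4, 8)) * 0.5   # input-to-hidden weights
W2 = rng.normal(size=(8, 2)) * 0.5   # hidden-to-output weights
lr = 0.1                             # learning rate

x = rng.normal(size=(1, 4))          # one trial input
target = np.array([[1.0, 0.0]])      # desired output

# Forward pass (inference).
h = np.tanh(x @ W1)                  # hidden activations
y = h @ W2                           # network output

# Backward pass: run the error from the last layer back to the first.
e = y - target                       # output-layer error
dW2 = h.T @ e                        # gradient for the output weights
e_hidden = (e @ W2.T) * (1.0 - h**2) # error routed back through W2
dW1 = x.T @ e_hidden                 # gradient for the input weights

# Weight update: combine forward activations with backpropagated errors.
W2 -= lr * dW2
W1 -= lr * dW1
```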

Alternative computing architectures, which trade complexity for efficiency, often cannot perform the information passing that backpropagation requires. As a consequence, the network's trained parameters must instead be obtained from an independent physics simulation of the entire hardware setup and its information processing. But creating simulations of sufficient quality can itself be challenging.

"We found that it was very hard and tough to apply backpropagation algorithms to our device," said Katsuma Inoue of NTT Device Technology Labs, one of the researchers involved in the study. "There always existed a gap between the mathematical model and the real device, owing to several factors, such as physical noise and inaccurate modeling."

The difficulty of implementing backpropagation led the authors to study and implement an alternative training algorithm. It builds on an algorithm called direct feedback alignment (DFA), first introduced in a 2016 paper. DFA reduces the need to pass information during training, and therefore the extent to which the physical system must be simulated. The authors' new "augmented DFA" algorithm removes the need for detailed device simulation entirely.
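For contrast, here is the same toy network trained with standard DFA, in the spirit of the 2016 paper. This is again a NumPy sketch under illustrative assumptions, not the authors' implementation; the augmented variant goes further still, while the standard form below retains the exact activation derivative. The key change is that the backward pass through W2's transpose is replaced by a fixed random matrix, so no error signal has to travel back through the network's own weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same toy network as in the backpropagation sketch above.
W1 = rng.normal(size=(4, 8)) * 0.5
W2 = rng.normal(size=(8, 2)) * 0.5
lr = 0.1

# DFA's key ingredient: a fixed random feedback matrix. It never
# changes during training and need not match W2 in any way.
B = rng.normal(size=(2, 8)) * 0.5

x = rng.normal(size=(1, 4))
target = np.array([[1.0, 0.0]])

# Forward pass, identical to before.
h = np.tanh(x @ W1)
y = h @ W2

# DFA update: the output error is projected straight to the hidden
# layer through B, with no backward pass through W2's transpose.
e = y - target
dW2 = h.T @ e                        # output weights: same as backprop
e_hidden = (e @ B) * (1.0 - h**2)    # random projection replaces e @ W2.T
dW1 = x.T @ e_hidden

W2 -= lr * dW2
W1 -= lr * dW1
```

Training still converges because the forward weights gradually adapt until the fixed random feedback becomes a useful teaching signal, a phenomenon known as feedback alignment.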

To study and test the algorithm, they implemented it on an optical analog computer, in which the connections between neurons are represented as intensities of light traveling through a ring-shaped optical fiber rather than as digitally stored numbers.

"It's an absolutely essential demonstration," said Daniel Brunner of the FEMTO-ST Institute, a French public research organization. Brunner develops photonic unconventional computers of a sort similar to the one used in the study. "The beauty of this particular algorithm is that it is not too difficult to implement in hardware, which is why this is so important."
