Optical neural network at 50zJ per op? Nope, but it’s still a good idea

by
Chris Lee
from Ars Technica
(credit: BeeBright/Getty Images)

Artificial intelligence (AI) has experienced a revival of pretty large proportions in the last decade. We've gone from AI being mostly useless to letting it ruin our lives in obscure and opaque ways. We've even given AI the task of crashing our cars for us.

AI experts will tell us that we just need bigger neural networks and the cars will probably stop crashing. You can get there by adding more graphics cards to an AI, but the power consumption becomes excessive. The ideal solution would be a neural network that can process and shovel data around at near-zero energy cost, which may be where we are headed with optical neural networks.

To give you an idea of the scale of energy we're talking about here, a good GPU uses 20 picojoules (1 pJ is 10⁻¹² J) for each multiply-and-accumulate operation. A purpose-built integrated circuit can reduce that to about 1 pJ. But if a team of researchers is correct, an optical neural network might reduce that number to an incredible 50 zeptojoules (1 zJ is 10⁻²¹ J).
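To put those per-operation figures in perspective, here is a minimal sketch that scales them up to a full workload. The per-op energies are the article's numbers; the workload size (one trillion multiply-accumulates) is an illustrative assumption, not from the article.

```python
# Per-op energy costs cited in the article, converted to joules.
GPU_J_PER_MAC = 20e-12      # 20 pJ per multiply-accumulate on a good GPU
ASIC_J_PER_MAC = 1e-12      # ~1 pJ on a purpose-built integrated circuit
OPTICAL_J_PER_MAC = 50e-21  # 50 zJ claimed for the optical neural network

# Assumed workload: one trillion MAC operations (illustrative only).
MACS = 1e12

for name, cost in [("GPU", GPU_J_PER_MAC),
                   ("ASIC", ASIC_J_PER_MAC),
                   ("optical", OPTICAL_J_PER_MAC)]:
    print(f"{name}: {MACS * cost:.2e} J total")
```

At these numbers, the optical approach would undercut the GPU by a factor of 4 × 10⁸ per operation, which is why the claim drew attention even with the "Nope" in the headline.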


External Content
Source RSS or Atom Feed
Feed Location http://feeds.arstechnica.com/arstechnica/index
Feed Title Ars Technica - All content
Feed Link https://arstechnica.com/