
Amazon begins shifting Alexa’s cloud AI to its own silicon

by Jim Salter
from Ars Technica

[Embedded video: Amazon engineers discuss the migration of 80 percent of Alexa's workload to Inferentia ASICs in a three-minute clip.]

On Thursday, an AWS blog post announced that the company has moved most of the cloud processing for its Alexa personal assistant off of Nvidia GPUs and onto its own Inferentia Application Specific Integrated Circuit (ASIC). Amazon developer Sébastien Stormacq describes Inferentia's hardware design as follows:

AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.
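The blog post doesn't walk through how workloads get ported, but in general, targeting Inferentia means compiling a model ahead of time with AWS's Neuron SDK rather than running it on a GPU. Here is a minimal sketch of that step, assuming the torch-neuron PyTorch integration; the ResNet-50 model and file name are placeholders rather than anything Alexa actually uses, and the exact API can vary by SDK version.

```python
# Illustrative only: compiling a PyTorch model for Inferentia (Inf1) with
# the AWS Neuron SDK's torch-neuron tracer. The ResNet-50 model and file
# name are placeholders, not part of Alexa's actual stack.
import torch
import torch_neuron  # noqa: F401 -- registers torch.neuron.* when installed
from torchvision import models

model = models.resnet50(pretrained=True)
model.eval()

# Example input fixing the shape the compiled graph is specialized for.
example = torch.zeros([1, 3, 224, 224], dtype=torch.float32)

# Ahead-of-time compilation: operators Neuron supports are lowered onto the
# NeuronCores; anything unsupported falls back to running on the host CPU.
model_neuron = torch.neuron.trace(model, example_inputs=[example])

# The result is a TorchScript module that can be saved, shipped to an Inf1
# instance, and loaded there for low-latency inference serving.
model_neuron.save("resnet50_neuron.pt")
```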

When an Amazon customer (usually someone who owns an Echo or Echo Dot) makes use of the Alexa personal assistant, very little of the processing is done on the device itself. The workload for a typical Alexa request looks something like this:

  1. A human speaks to an Amazon Echo, saying: "Alexa, what's the special ingredient in Earl Grey tea?"
  2. The Echo detects the wake word ("Alexa") using its own on-board processing
  3. The Echo streams the request to Amazon data centers
  4. Within the Amazon data center, the voice stream is converted to phonemes (Inference AI workload)
  5. Still in the data center, phonemes are converted to words (Inference AI workload)
  6. Words are assembled into phrases (Inference AI workload)
  7. Phrases are distilled into intent (Inference AI workload)
  8. Intent is routed to an appropriate fulfillment service, which returns a response as a JSON document
  9. JSON document is parsed, including text for Alexa's reply
  10. Text form of Alexa's reply is converted into natural-sounding speech (Inference AI workload)
  11. Natural speech audio is streamed back to the Echo device for playback: "It's bergamot orange oil."

As you can see, almost all of the actual work done in fulfilling an Alexa request happens in the cloud, not in an Echo or Echo Dot device itself. And the vast majority of that cloud work is performed not by traditional if-then logic but by inference, which is the answer-providing side of neural network processing.
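To make that division of labor concrete, here is a minimal, runnable Python sketch of the request flow listed above. Every function name is a hypothetical stand-in: in production, each inference step is a neural-network model served from the cloud (now largely on Inferentia-backed Inf1 instances), and fulfillment is a real skill or knowledge service.

```python
# Hypothetical sketch of the cloud-side Alexa flow listed above. Every
# function is a stand-in: each "infer_*" step represents a neural-network
# inference workload, and fulfill() represents a non-ML fulfillment service.
import json

def infer_phonemes(audio: bytes) -> list[str]:
    # Step 4: an acoustic model turns the voice stream into phonemes (inference)
    return ["W", "AH", "T", "S", "DH", "AH", "..."]

def infer_words(phonemes: list[str]) -> list[str]:
    # Steps 5-6: phonemes become words, words are assembled into phrases (inference)
    return ["what's", "the", "special", "ingredient", "in", "earl", "grey", "tea"]

def infer_intent(phrase: list[str]) -> dict:
    # Step 7: the phrase is distilled into a structured intent (inference)
    return {"intent": "GetIngredient", "slots": {"item": "Earl Grey tea"}}

def fulfill(intent: dict) -> str:
    # Steps 8-9: a fulfillment service answers with a JSON document (no ML here)
    return json.dumps({"reply": "It's bergamot orange oil."})

def synthesize_speech(text: str) -> bytes:
    # Step 10: text-to-speech turns the reply into natural-sounding audio (inference)
    return text.encode("utf-8")  # placeholder for an audio stream

def handle_request(audio_from_echo: bytes) -> bytes:
    # Steps 4-11 chained together; the result streams back to the Echo.
    words = infer_words(infer_phonemes(audio_from_echo))
    reply = json.loads(fulfill(infer_intent(words)))["reply"]
    return synthesize_speech(reply)

print(handle_request(b"<voice stream from the Echo>"))
```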

