
Podcast: Accelerating AI Inference with Intel Deep Learning Boost

by staff, from High-Performance Computing News Analysis | insideHPC

In this Chip Chat podcast, Jason Kennedy from Intel describes how Intel Deep Learning Boost works as an AI accelerator embedded in the CPU and designed to speed up deep learning inference workloads. "The key to Intel DL Boost - and its performance kick - is augmentation of the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This innovation significantly accelerates inference performance for deep learning workloads optimized to use vector neural network instructions (VNNI). Image classification, language translation, object detection, and speech recognition are just a few examples of workloads that can benefit."
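To make the VNNI angle concrete, here is a minimal sketch (not from the podcast) of an int8 dot product, the core operation of quantized inference, written with the AVX-512 VNNI intrinsic _mm512_dpbusd_epi32. The function name and its assumptions (length a multiple of 64 bytes, unsigned activations times signed weights) are illustrative only; the point is that one VNNI instruction fuses the multiply, pairwise add, and 32-bit accumulate that previously took a sequence of AVX-512 instructions.

```c
/* Minimal sketch of an int8 dot product using AVX-512 VNNI.
 * Compile with: gcc -O2 -mavx512vnni -mavx512f dot.c
 * Hypothetical example; function name and layout are assumptions,
 * not code from Intel or insideHPC. */
#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

/* Dot product of n bytes, n assumed to be a multiple of 64 for brevity.
 * 'a' holds unsigned 8-bit activations, 'b' holds signed 8-bit weights. */
int32_t dot_u8s8(const uint8_t *a, const int8_t *b, size_t n) {
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i));
        __m512i vb = _mm512_loadu_si512((const void *)(b + i));
        /* One VNNI instruction (VPDPBUSD): multiply four adjacent
         * u8*s8 pairs, sum them, and accumulate into 32-bit lanes. */
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    /* Horizontal sum of the sixteen 32-bit partial accumulators. */
    return _mm512_reduce_add_epi32(acc);
}
```

A kernel like this is what deep learning frameworks generate (via libraries such as oneDNN) when a model is quantized to int8, which is where the inference speedup described in the podcast comes from.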

The post Podcast: Accelerating AI Inference with Intel Deep Learning Boost appeared first on insideHPC.
