Podcast: Accelerating AI Inference with Intel Deep Learning Boost
by staff from High-Performance Computing News Analysis | insideHPC
In this Chip Chat podcast, Jason Kennedy from Intel describes how Intel Deep Learning Boost works as an AI accelerator embedded in the CPU, designed to speed up deep learning inference workloads. "The key to Intel DL Boost - and its performance kick - is augmentation of the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This innovation significantly accelerates inference performance for deep learning workloads optimized to use vector neural network instructions (VNNI). Image classification, language translation, object detection, and speech recognition are just a few examples of workloads that can benefit."
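To make the VNNI idea concrete: instructions such as VPDPBUSD fuse what previously took three AVX-512 instructions (an 8-bit multiply, a 16-bit widening add, and a 32-bit accumulate) into a single int8 dot-product step. Below is a minimal Python sketch of what one 32-bit lane of such an instruction computes; the function name and structure are illustrative, not Intel's API.

```python
def vnni_dot_lane(acc, a_bytes, b_bytes):
    """Model one 32-bit lane of an int8 dot-product (VPDPBUSD-style):
    multiply four unsigned 8-bit values from a by four signed 8-bit
    values from b, sum the products, and add to the 32-bit accumulator."""
    assert len(a_bytes) == len(b_bytes) == 4
    total = sum(u * s for u, s in zip(a_bytes, b_bytes))
    # Wrap to a signed 32-bit result, as the hardware accumulator does.
    result = (acc + total) & 0xFFFFFFFF
    return result - 0x100000000 if result >= 0x80000000 else result

# Example: accumulate four u8 x s8 products into one 32-bit lane.
acc = vnni_dot_lane(0, [1, 2, 3, 4], [5, -6, 7, -8])  # 5 - 12 + 21 - 32 = -18
```

A full AVX-512 register holds sixteen such lanes, so each instruction performs 64 of these 8-bit multiply-accumulates at once, which is where the inference speedup for quantized networks comes from.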