
Making Sense Of Nvidia's Supernic

by janrinok
from SoylentNews on (#6GMX1)

Arthur T Knackerbracket has processed the following story:

Nvidia has given the world a "SuperNIC" - another device to improve network performance, just like the "SmartNIC," the "data processing unit" (DPU), and the "infrastructure processing unit" (IPU). But the GPU-maker insists its new device is more than just a superlative.

So what exactly is a SuperNIC? An Nvidia explainer describes it as a "new class of networking accelerator designed to supercharge AI workloads in Ethernet-based networks." Key features include high-speed packet reordering, advanced congestion control, programmable I/O pathing, and, critically, integration with Nvidia's broader hardware and software portfolio.
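To make "high-speed packet reordering" concrete: when traffic is sprayed across multiple network paths, packets can arrive out of sequence and must be put back in order before delivery. The toy sketch below (the `ReorderBuffer` class and its API are invented here purely for illustration; the SuperNIC does this in dedicated hardware at line rate) shows the basic idea:

```python
# Toy model of receive-side packet reordering: out-of-order packets are
# buffered by sequence number until the gaps fill in, then delivered in order.
class ReorderBuffer:
    def __init__(self, first_seq=0):
        self.next_seq = first_seq   # next sequence number we can deliver
        self.pending = {}           # out-of-order packets, keyed by seq

    def receive(self, seq, payload):
        """Accept one packet; return any payloads now deliverable in order."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = ReorderBuffer()
print(buf.receive(1, "B"))  # [] - packet 0 still missing, so B is held back
print(buf.receive(0, "A"))  # ['A', 'B'] - gap filled, both delivered in order
print(buf.receive(2, "C"))  # ['C']
```

The point of doing this in the NIC rather than the host is that the CPU never sees the reordering work at all.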

If that sounds like what a SmartNIC or DPU would do, you're not wrong. The SuperNIC is even based on a current Nvidia DPU, the BlueField-3.

Nvidia's BlueField-3 SuperNIC promises InfiniBand-like network performance - if you buy Nvidia's fancy 51.2Tbit/sec switches. Source: Nvidia.

The difference is the SuperNIC is designed to work alongside Nvidia's own Spectrum-4 switches as part of its Spectrum-X offering.

Nvidia's senior veep for networking, Kevin Deierling, emphasized in an interview with The Register that the SuperNIC isn't a rebrand of the DPU, but rather a different product.

Before considering the SuperNIC, it's worth remembering that SmartNICs/IPUs/DPUs are network interface controllers (NICs) that include modest compute capabilities - sometimes fixed-function ASICs, with or without a couple of Arm cores sprinkled in, or even highly customizable FPGAs.

Many of Intel and AMD's SmartNICs are based around FPGAs, while Nvidia's BlueField-3 class of NICs pairs Arm cores with a bunch of dedicated accelerator blocks for things like storage, networking, and security offload.

This variety means that certain SmartNICs are better suited to - or at the very least marketed toward - certain applications than others.

For the most part, we've seen SmartNICs - or whatever your preferred vendor wants to call them - deployed in one of two scenarios. The first is in large cloud and hyperscale datacenters where they're used to offload and accelerate storage, networking, security, and even hypervisor management from the host CPU.

Amazon Web Services' custom Nitro cards are a prime example. The cards are designed to physically separate the cloudy control plane from the host. The result is that more CPU cycles are available to run tenants' workloads.


