
Gyrfalcon Acceleration Chips Speed SolidRun AI Inference Server

by staff, High-Performance Computing News Analysis | insideHPC

Today SolidRun introduced a new Arm-based AI inference server optimized for the edge. Highly scalable and modular, the Janux GS31 supports leading neural network frameworks and can be configured with up to 128 Gyrfalcon Lightspeeur SPR2803 AI acceleration chips for unrivaled inference performance on today's most complex video AI models.

From the announcement: "While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions. Working with Gyrfalcon and utilizing their industry-proven ASICs has allowed us to create a powerful, cost-effective solution for deploying AI at the Edge that offers seamless scalability."

