
Cerebras Systems' Wafer Scale Engine Deployed at Argonne National Labs

by
martyb
from SoylentNews on (#4VM6Y)

takyon writes:

Cerebras Unveils First Installation of Its AI Supercomputer at Argonne National Labs

At Supercomputing 2019 in Denver, Colo., Cerebras Systems unveiled the computer powered by the world's biggest chip. Cerebras says the computer, the CS-1, has the machine learning capability of hundreds of racks' worth of GPU-based computers consuming hundreds of kilowatts, yet it occupies only one-third of a standard rack and consumes about 17 kW. Argonne National Laboratory, future home of what's expected to be the United States' first exascale supercomputer, says it has already deployed a CS-1. Argonne is one of two announced U.S. national laboratory customers for Cerebras; the other is Lawrence Livermore National Laboratory.

The system "is the fastest AI computer," says CEO and cofounder Andrew Feldman. He compared it with Google's TPU clusters (the second of three generations of that company's AI computers), noting that one of those "takes 10 racks and over 100 kilowatts to deliver a third of the performance of a single [CS-1] box."
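Taking those figures at face value, a quick back-of-the-envelope calculation (a rough sketch using only the numbers quoted above, which are vendor claims rather than measured benchmarks) suggests the CS-1 would deliver roughly 17 times the performance per watt of the TPU cluster Feldman cited:

    # Rough performance-per-watt comparison from the quoted figures.
    # All inputs are vendor claims, not benchmark results.
    cs1_power_kw = 17.0        # CS-1 draws about 17 kW
    tpu_power_kw = 100.0       # TPU cluster draws "over 100 kilowatts"
    tpu_relative_perf = 1 / 3  # TPU cluster delivers a third of a CS-1's performance

    cs1_perf_per_kw = 1.0 / cs1_power_kw                # CS-1 performance normalized to 1.0
    tpu_perf_per_kw = tpu_relative_perf / tpu_power_kw

    print(f"CS-1 advantage: {cs1_perf_per_kw / tpu_perf_per_kw:.1f}x perf/kW")
    # prints "CS-1 advantage: 17.6x perf/kW"

And since "over 100 kilowatts" is a floor, 17.6x is the low end of what the quoted numbers imply.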

The CS-1 is designed to speed the training of novel and large neural networks, a process that can take weeks or longer. Powered by a 400,000-core, 1.2-trillion-transistor wafer-scale processor chip, the CS-1 should collapse that task to minutes or even seconds. However, Cerebras did not provide data showing this performance in terms of standard AI benchmarks such as the new MLPerf standards. Instead, it has been wooing potential customers by having them train their own neural network models on machines at Cerebras.

[...] The CS-1's first application is in predicting cancer drug response as part of a U.S. Department of Energy and National Cancer Institute collaboration. It is also being used to help understand the behavior of colliding black holes and the gravitational waves they produce. A previous run of that problem required 1,024 of the Theta supercomputer's 4,392 nodes, roughly a quarter of the machine.

Also at TechCrunch, VentureBeat, and Wccftech.

Previously: Cerebras "Wafer Scale Engine" Has 1.2 Trillion Transistors, 400,000 Cores

