
Alternative Clouds Are Booming As Companies Seek Cheaper Access To GPUs

by BeauHD, from Slashdot on (#6MKZ1)
An anonymous reader quotes a report from TechCrunch: CoreWeave, the GPU infrastructure provider that began life as a cryptocurrency mining operation, this week raised $1.1 billion in new funding from investors including Coatue, Fidelity and Altimeter Capital. The round brings its valuation to $19 billion post-money and its total raised to $5 billion in debt and equity -- a remarkable figure for a company that's less than 10 years old. It's not just CoreWeave. Lambda Labs, which also offers an array of cloud-hosted GPU instances, in early April secured a "special purpose financing vehicle" of up to $500 million, months after closing a $320 million Series C round. The nonprofit Voltage Park, backed by crypto billionaire Jed McCaleb, last October announced that it's investing $500 million in GPU-backed data centers. And Together AI, a cloud GPU host that also conducts generative AI research, in March landed $106 million in a Salesforce-led round.

So why all the enthusiasm for -- and cash pouring into -- the alternative cloud space? The answer, as you might expect, is generative AI. As the generative AI boom continues, so does the demand for the hardware to train and run generative AI models at scale. GPUs are the logical architectural choice for training, fine-tuning and running models because they contain thousands of cores that can work in parallel to perform the linear algebra operations that make up generative models. But buying and operating your own GPUs is expensive, so most devs and organizations turn to the cloud instead.

Incumbents in the cloud computing space -- Amazon Web Services (AWS), Google Cloud and Microsoft Azure -- offer no shortage of GPU and specialty hardware instances optimized for generative AI workloads. But for at least some models and projects, alternative clouds can end up being cheaper -- and delivering better availability.
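To make the parallelism point concrete, here is a back-of-the-envelope sketch (not from the article; the layer dimensions are illustrative assumptions). A transformer layer is dominated by dense matrix multiplies, and every output element of a multiply is an independent dot product -- exactly the kind of work that spreads across a GPU's thousands of cores:

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    """FLOPs for an (m x k) @ (k x n) dense matrix multiply:
    each of the m*n outputs is a k-length dot product (k multiplies
    plus k adds), so roughly 2*m*k*n operations in total."""
    return 2 * m * k * n

# Hypothetical example: one 4096-wide projection applied to a
# 2048-token batch -- dimensions chosen for illustration only.
flops = matmul_flops(2048, 4096, 4096)

# Each of the m*n output elements can be computed independently,
# which is why thousands of GPU cores can all stay busy at once.
independent_outputs = 2048 * 4096

print(f"{flops:,} FLOPs across {independent_outputs:,} independent outputs")
```

A single layer of a single forward pass already involves tens of billions of floating-point operations, which is why this workload lands on GPUs rather than CPUs.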
On CoreWeave, renting an Nvidia A100 40GB -- one popular choice for model training and inferencing -- costs $2.39 per hour, which works out to $1,200 per month. On Azure, the same GPU costs $3.40 per hour, or $2,482 per month; on Google Cloud, it's $3.67 per hour, or $2,682 per month. Given that generative AI workloads are usually performed on clusters of GPUs, the cost deltas quickly grow.

"Companies like CoreWeave participate in a market we call specialty 'GPU as a service' cloud providers," Sid Nag, VP of cloud services and technologies at Gartner, told TechCrunch. "Given the high demand for GPUs, they offer an alternative to the hyperscalers, where they've taken Nvidia GPUs and provided another route to market and access to those GPUs." Nag points out that even some Big Tech firms have begun to lean on alternative cloud providers as they run up against compute capacity challenges. Microsoft signed a multi-billion-dollar deal with CoreWeave last June to help provide enough power to train OpenAI's generative AI models. "Nvidia, the furnisher of the bulk of CoreWeave's chips, sees this as a desirable trend, perhaps for leverage reasons; it's said to have given some alternative cloud providers preferential access to its GPUs," reports TechCrunch.
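As a rough illustration of how those deltas scale with cluster size, here is a sketch using the on-demand hourly rates quoted above; the cluster size and runtime are illustrative assumptions, and the article's per-month figures likely reflect committed or discounted pricing rather than a simple hourly-rate multiplication:

```python
# Per-hour A100 40GB rates quoted in the article (on-demand).
HOURLY_RATE = {"CoreWeave": 2.39, "Azure": 3.40, "Google Cloud": 3.67}

def cluster_cost(provider: str, gpus: int, hours: int) -> float:
    """On-demand cost of running `gpus` A100 instances for `hours` hours."""
    return HOURLY_RATE[provider] * gpus * hours

# Hypothetical 8-GPU training cluster running around the clock for 30 days.
gpus, hours = 8, 24 * 30
for provider in HOURLY_RATE:
    print(f"{provider:12} ${cluster_cost(provider, gpus, hours):>10,.2f}")
```

At these assumed numbers, a roughly $1-per-GPU-hour gap between CoreWeave and Azure becomes a difference of several thousand dollars per month on even a small cluster.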


Read more of this story at Slashdot.

External Content
Source: RSS or Atom Feed
Feed Location: https://rss.slashdot.org/Slashdot/slashdotMain
Feed Title: Slashdot
Feed Link: https://slashdot.org/
Feed Copyright: Copyright Slashdot Media. All Rights Reserved.