Nvidia Takes the Wraps off Hopper, Its Latest GPU Architecture
After much speculation, Nvidia today at its March 2022 GTC event announced the Hopper GPU architecture, a line of graphics cards that the company says will accelerate the types of algorithms commonly used in data science. Named for Grace Hopper, the pioneering U.S. computer scientist, the new architecture succeeds Nvidia's Ampere architecture, which launched roughly two years ago. From a report: The first card in the Hopper lineup is the H100, containing 80 billion transistors and a component called the Transformer Engine that's designed to speed up specific categories of AI models. Another architectural highlight is Nvidia's MIG technology, which allows an H100 to be partitioned into seven smaller, isolated instances to handle different types of jobs. "Datacenters are becoming AI factories -- processing and refining mountains of data to produce intelligence," Nvidia founder and CEO Jensen Huang said in a press release. "Nvidia H100 is the engine of the world's AI infrastructure that enterprises use to accelerate their AI-driven businesses." The H100 is the first Nvidia GPU to feature dynamic programming instructions (DPX), "instructions" in this context referring to segments of code that spell out the steps to be executed. Developed in the 1950s, dynamic programming is an approach to solving problems using two key techniques: recursion and memoization. Recursion in dynamic programming involves breaking a problem down into sub-problems, ideally saving time and computational effort. In memoization, the answers to these sub-problems are stored so that they don't need to be recomputed when they come up again later in the main problem. Dynamic programming is used to find optimal routes for moving machines (e.g., robots), streamline operations on sets of databases, align unique DNA sequences, and more.
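To make the recursion-plus-memoization idea concrete, here is a minimal Python sketch (not Nvidia's DPX instruction set, just an illustration of the general technique) that computes the edit distance between two short DNA-like strings, one of the classic dynamic programming problems behind sequence alignment. The problem is broken into sub-problems recursively, and memoization caches each sub-problem's answer so it is never recomputed.

```python
from functools import lru_cache

def edit_distance(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b,
    computed with recursion plus memoization (dynamic programming)."""

    @lru_cache(maxsize=None)  # memoization: cache each sub-problem's answer
    def solve(i: int, j: int) -> int:
        # Base cases: one string is exhausted, so insert the rest of the other.
        if i == len(a):
            return len(b) - j
        if j == len(b):
            return len(a) - i
        if a[i] == b[j]:
            return solve(i + 1, j + 1)       # characters match, no edit needed
        # Recursion: try substitution, deletion, insertion; keep the cheapest.
        return 1 + min(solve(i + 1, j + 1),  # substitute a[i] with b[j]
                       solve(i + 1, j),      # delete a[i]
                       solve(i, j + 1))      # insert b[j]

    return solve(0, 0)

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```

Without the cache, the recursion would revisit the same (i, j) pairs exponentially many times; with memoization the work drops to one evaluation per pair, which is the speedup DPX-style hardware support aims to push further.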
Read more of this story at Slashdot.