China's AI Industry Barely Slowed By US Chip Export Rules
Export controls imposed by the U.S. on microchips, aimed at hindering China's technological advancement, have had minimal effect on the country's tech sector. While the restrictions have forced Nvidia to offer slowed-down variants of its chips for the Chinese market, they have not halted China's progress in areas like AI: the reduced performance is still an improvement for Chinese firms, and researchers are finding ways to work around the limitations. Reuters reports:

Nvidia has created variants of its chips for the Chinese market that are slowed down to meet U.S. rules. Industry experts told Reuters the newest one - the Nvidia H800, announced in March - will likely take 10% to 30% longer to carry out some AI tasks and could double some costs compared with Nvidia's fastest U.S. chips.

Even the slowed Nvidia chips represent an improvement for Chinese firms. Tencent Holdings, one of China's largest tech companies, estimated in April that systems using Nvidia's H800 will cut the time it takes to train its largest AI system by more than half, from 11 days to four days. "The AI companies that we talk to seem to see the handicap as relatively small and manageable," said Charlie Chai, a Shanghai-based analyst with 86Research.

Part of the U.S. strategy in setting the rules was to avoid a shock so severe that the Chinese would ditch U.S. chips altogether and redouble their own chip-development efforts. "They had to draw the line somewhere, and wherever they drew it, they were going to run into the challenge of how to not be immediately disruptive, but how to also over time degrade China's capability," said one chip industry executive who requested anonymity to talk about private discussions with regulators.

The export restrictions have two parts. The first puts a ceiling on a chip's ability to calculate extremely precise numbers, a measure designed to limit supercomputers that can be used in military research. Chip industry sources said that was an effective action.
But calculating extremely precise numbers is less relevant in AI work like large language models, where the amount of data the chip can chew through is more important. [...] The second U.S. limit is on chip-to-chip transfer speeds, which does affect AI. The models behind technologies such as ChatGPT are too large to fit onto a single chip. Instead, they must be spread over many chips - often thousands at a time - which all need to communicate with one another.

Nvidia has not disclosed the China-only H800 chip's performance details, but a specification sheet seen by Reuters shows a chip-to-chip speed of 400 gigabytes per second, less than half the peak speed of 900 gigabytes per second for Nvidia's flagship H100 chip available outside China. Some in the AI industry believe that is still plenty of speed. Naveen Rao, chief executive of a startup called MosaicML that specializes in helping AI models run better on limited hardware, estimated a 10-30% system slowdown. "There are ways to get around all this algorithmically," he said. "I don't see this being a boundary for a very long time - like 10 years."

Moreover, AI researchers are trying to slim down the massive systems they have built to cut the cost of training products similar to ChatGPT and other processes. Those will require fewer chips, reducing chip-to-chip communications and lessening the impact of the U.S. speed limits.
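The bandwidth gap can be put in rough perspective with a back-of-envelope calculation. The sketch below is illustrative only: the parameter count and bytes-per-parameter are assumed values (GPT-3-scale, half-precision gradients), not figures from the article; only the 400 and 900 GB/s link speeds come from the report.

```python
# Back-of-envelope sketch: time to move one gradient-sized payload over a
# single chip-to-chip link scales inversely with link bandwidth.

def comm_time_s(params: float, bytes_per_param: float, bandwidth_gb_s: float) -> float:
    """Seconds to transfer params * bytes_per_param bytes over one link."""
    return (params * bytes_per_param) / (bandwidth_gb_s * 1e9)

PARAMS = 175e9          # assumed GPT-3-scale parameter count (illustrative)
BYTES_PER_PARAM = 2     # assumed fp16/bf16 gradients (illustrative)

t_h100 = comm_time_s(PARAMS, BYTES_PER_PARAM, 900)  # H100 link: 900 GB/s
t_h800 = comm_time_s(PARAMS, BYTES_PER_PARAM, 400)  # H800 link: 400 GB/s

print(f"H100: {t_h100:.2f} s per exchange")
print(f"H800: {t_h800:.2f} s per exchange ({t_h800 / t_h100:.2f}x slower)")
```

The raw transfer is 2.25x slower on the restricted link, yet the quoted end-to-end slowdown is only 10-30%: communication is one slice of a training step and can be overlapped with computation, which is consistent with Rao's point that algorithmic workarounds blunt the limit.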
Read more of this story at Slashdot.