Nvidia Disputes Intel’s Machine Learning Performance Claims
by Rich Brueckner from High-Performance Computing News Analysis | insideHPC
"Few fields are moving faster right now than deep learning," writes Buck. "Today's neural networks are 6x deeper and more powerful than just a few years ago. There are new techniques in multi-GPU scaling that offer even faster training performance. In addition, our architecture and software have improved neural network training time by over 10x in a year by moving from Kepler to Maxwell to today's latest Pascal-based systems, like the DGX-1 with eight Tesla P100 GPUs. So it's understandable that newcomers to the field may not be aware of all the developments that have been taking place in both hardware and software."