Scaling Deep Learning Algorithms on Extreme Scale Architectures
by Rich Brueckner, High-Performance Computing News Analysis | insideHPC
Abhinav Vishnu from PNNL gave this talk at the MVAPICH User Group. "Deep Learning (DL) is ubiquitous. Yet leveraging distributed memory systems for DL algorithms is incredibly hard. In this talk, we will present approaches to bridge this critical gap. Our results will include validation on several US supercomputing sites such as Berkeley's NERSC, the Oak Ridge Leadership Computing Facility, and PNNL Institutional Computing."
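The abstract does not spell out the approaches, but the canonical starting point for distributed-memory training on MPI-based systems (the setting of the MVAPICH User Group) is data-parallel synchronous SGD, where each rank computes gradients on its local shard of the data and the gradients are averaged with an allreduce before the weight update. The sketch below is a hypothetical illustration of that general pattern using mpi4py and NumPy; it is not code from the talk, and the model parameters and gradients are placeholders.

```python
# Minimal sketch of data-parallel synchronous SGD over MPI (illustrative only).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical parameter vector and a local gradient standing in for
# backpropagation over this rank's shard of the training data.
params = np.zeros(1000, dtype=np.float64)
local_grad = np.random.rand(1000)

# Sum per-rank gradients across all ranks, then divide to get the global average.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

# Every rank applies the same averaged gradient, keeping replicas in sync.
learning_rate = 0.01
params -= learning_rate * global_grad
```

Run with, for example, `mpirun -np 4 python train_sketch.py`; production frameworks replace the placeholder gradient computation with a real backward pass and typically overlap the allreduce with computation to hide communication cost.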