Video: Demystifying Parallel and Distributed Deep Learning
by Rich Brueckner from High-Performance Computing News Analysis | insideHPC
Torsten Hoefler from ETH Zürich gave this talk at the 2018 Swiss HPC Conference. "Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this talk, we describe the problem from a theoretical perspective, followed by approaches for its parallelization."