Accelerating High-Resolution Weather Models with Deep-Learning Hardware
by staff from High-Performance Computing News Analysis | insideHPC
Sam Hatfield from the University of Oxford gave this talk at the PASC19 conference. "In this paper, we investigate the use of mixed-precision hardware that supports floating-point operations at double-, single- and half-precision. In particular, we investigate the potential use of the NVIDIA Tensor Core, a mixed-precision matrix-matrix multiplier mainly developed for use in deep learning, to accelerate the calculation of the Legendre transforms in the Integrated Forecasting System (IFS), one of the leading global weather forecast models."
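The core idea is that a Legendre transform is a matrix multiplication, which is exactly the operation Tensor Cores accelerate by taking half-precision inputs and accumulating in single precision. A minimal NumPy sketch of that mixed-precision scheme (all sizes and variable names here are illustrative, not taken from the IFS; single-precision accumulation is emulated with a float32 matmul):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: grid points in latitude and spectral coefficients.
n_lat, n_spec = 256, 128

# A Legendre-transform-like matrix: Legendre polynomials evaluated at a
# set of latitudes (plain cosine-spaced points here for simplicity).
x = np.cos(np.linspace(0.01, np.pi - 0.01, n_lat))
P = np.polynomial.legendre.legvander(x, n_spec - 1)  # shape (n_lat, n_spec)

coeffs = rng.standard_normal(n_spec)

# Reference: the transform in double precision.
grid_fp64 = P @ coeffs

# Tensor-Core-style mixed precision: round both operands to half
# precision, then multiply and accumulate in single precision.
grid_mixed = (P.astype(np.float16).astype(np.float32)
              @ coeffs.astype(np.float16).astype(np.float32))

rel_err = np.linalg.norm(grid_mixed - grid_fp64) / np.linalg.norm(grid_fp64)
print(f"relative error of mixed-precision transform: {rel_err:.2e}")
```

The sketch shows the trade-off the talk examines: the mixed-precision product agrees with the double-precision reference to roughly half-precision accuracy, while on real hardware it can run on much faster matrix units.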