Deep Learning Frameworks Get a Performance Benefit from Intel MKL Matrix-Matrix Multiplication
by Richard Friedman from High-Performance Computing News Analysis | insideHPC
Intel(R) Math Kernel Library 2017 (Intel(R) MKL 2017) includes new GEMM kernels that are optimized for various skewed matrix sizes. The new kernels take advantage of Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) and achieve high GEMM performance on multicore and many-core Intel(R) architectures, particularly for the matrix shapes that arise in deep neural networks.