NVIDIA GeForce RTX 2080 Ti Shows Very Strong Compute Performance Potential
Besides the new GeForce RTX 2080 series being attractive to developers wanting to make use of new technologies like RTX ray-tracing, mesh shaders, and DLSS (Deep Learning Super Sampling), CUDA and OpenCL benchmarking so far on the GeForce RTX 2080 Ti is yielding impressive performance -- even outside of the obvious AI / deep learning workloads that stand to benefit from the Turing tensor cores. Here are some benchmarks looking at the OpenCL/CUDA performance on the high-end Maxwell, Pascal, and Turing cards as well as an AMD Radeon RX Vega 64 for reference. System power consumption, performance-per-Watt, and performance-per-dollar metrics also round out this latest Ubuntu Linux GPU compute comparison.
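For readers wanting to confirm their own OpenCL stack sees the card before trying to reproduce numbers like these, a minimal device-query sketch in C is shown below. This is not from the article; the file name and build line are illustrative, but all of the API calls are standard OpenCL host functions. Build with something like: gcc clquery.c -o clquery -lOpenCL

/* Minimal OpenCL device enumeration: list each GPU device per platform
 * along with its compute unit count. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS
        || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }
    for (cl_uint p = 0; p < num_platforms; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof pname, pname, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices,
                           &num_devices) != CL_SUCCESS)
            continue;  /* platform exposes no GPU devices */

        for (cl_uint d = 0; d < num_devices; d++) {
            char dname[256];
            cl_uint cus = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof dname, dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof cus, &cus, NULL);
            printf("%s: %s (%u compute units)\n", pname, dname, cus);
        }
    }
    return 0;
}

On a properly configured system this should report the RTX 2080 Ti (or whatever GPU is installed) under the NVIDIA CUDA platform; if nothing shows up, the OpenCL ICD or driver setup is the first thing to check.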