Using GPU resources
Many computational problems can be greatly sped up by using GPU resources, in many cases by a factor of 10 or more. Broadly speaking, this is particularly true for operations over large arrays, where the work decomposes into many independent sub-operations, each touching only a small part of the array. Typical examples include elementwise arithmetic, matrix multiplication, numerical integration, tensor arithmetic, and so on. NVIDIA provides an API, CUDA, which makes it relatively easy to write scientific computing code for GPUs in a C-like programming environment. There are also libraries built on CUDA, such as cuTENSOR, cuSPARSE, and cuBLAS, which provide tensor algebra, sparse linear algebra, and dense linear algebra, respectively. Moreover, there are many libraries which leverage CUDA to provide GPU access through higher-level programming languages, such as Python.
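As a minimal sketch of the CUDA programming model (not specific to any particular system or library), the example below adds two arrays elementwise on the GPU. The kernel name, array size, and launch configuration are arbitrary choices for illustration; it can be compiled with nvcc.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Elementwise addition: each GPU thread handles one array element.
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                    // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and initialise host (CPU) arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) arrays and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back to the host and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);            // expected: 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

In practice one would rarely hand-write kernels for such simple operations; libraries like cuBLAS, or higher-level language bindings, provide tuned implementations of common array and linear-algebra routines.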