Solving large systems of linear equations on GPUs
By:
Llano-Ríos T.F., Ocampo-García J.D., Yepes-Ríos J.S., Correa-Zabala F.J., Trefftz C.
Published:
January 1, 2018
Abstract:
Graphics Processing Units (GPUs) have become more accessible peripheral devices with great computing capacity. Moreover, GPUs can be used not only to accelerate the graphics produced by a computer but also for general-purpose computing. Many researchers use this technique on their personal workstations to accelerate the execution of their programs and have often found that the amount of memory available on GPU cards is typically smaller than the amount of memory available on the host computer. We are interested in exploring approaches to solve problems under this restriction. Our main contribution is to devise ways in which portions of the problem can be moved to the memory of the GPU to be solved using its multiprocessing capabilities. We implemented the Jacobi iterative method on a GPU to solve systems of linear equations and report the results obtained, analyzing performance and accuracy. Our code solves systems of linear equations large enough to exceed the card's memory, but not the host memory. Significant speedups were observed: the execution time to solve each system is shorter than the times obtained with Intel® MKL and Eigen, libraries designed to work on CPUs. © Springer Nature Switzerland AG 2018.
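For orientation, the sketch below shows one way a Jacobi sweep can be expressed as a CUDA kernel, where each thread updates a single unknown from the previous iterate. This is an illustrative example only, not the authors' implementation: the kernel name `jacobiSweep`, the problem size, the fixed iteration count, and the diagonally dominant test matrix are all assumptions, and the paper's key contribution of streaming portions of a system larger than device memory is not reproduced here (the whole system is assumed to fit on the card).

```cuda
// jacobi.cu -- minimal sketch of the Jacobi method on a GPU (hypothetical code,
// not the paper's implementation). Assumes the full system fits in device memory.
#include <cstdio>
#include <utility>
#include <cuda_runtime.h>

// One Jacobi sweep: x_new[i] = (b[i] - sum_{j != i} A[i][j] * x_old[j]) / A[i][i]
__global__ void jacobiSweep(const double *A, const double *b,
                            const double *xOld, double *xNew, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    double sigma = 0.0;
    for (int j = 0; j < n; ++j)
        if (j != i) sigma += A[i * n + j] * xOld[j];
    xNew[i] = (b[i] - sigma) / A[i * n + i];
}

int main() {
    const int n = 1024, iters = 100;               // assumed sizes for the demo
    size_t matBytes = n * n * sizeof(double), vecBytes = n * sizeof(double);

    // Build a diagonally dominant test system so the Jacobi iteration converges.
    double *hA = (double *)malloc(matBytes), *hB = (double *)malloc(vecBytes);
    for (int i = 0; i < n; ++i) {
        hB[i] = 1.0;
        for (int j = 0; j < n; ++j) hA[i * n + j] = (i == j) ? 2.0 * n : 1.0;
    }

    double *dA, *dB, *dX, *dXNew;
    cudaMalloc(&dA, matBytes); cudaMalloc(&dB, vecBytes);
    cudaMalloc(&dX, vecBytes); cudaMalloc(&dXNew, vecBytes);
    cudaMemcpy(dA, hA, matBytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, vecBytes, cudaMemcpyHostToDevice);
    cudaMemset(dX, 0, vecBytes);                   // initial guess x0 = 0

    int threads = 256, blocks = (n + threads - 1) / threads;
    for (int k = 0; k < iters; ++k) {
        jacobiSweep<<<blocks, threads>>>(dA, dB, dX, dXNew, n);
        std::swap(dX, dXNew);                      // the new iterate becomes the old one
    }

    double x0;
    cudaMemcpy(&x0, dX, sizeof(double), cudaMemcpyDeviceToHost);
    printf("x[0] after %d sweeps: %f\n", iters, x0);

    cudaFree(dA); cudaFree(dB); cudaFree(dX); cudaFree(dXNew);
    free(hA); free(hB);
    return 0;
}
```

In the out-of-core setting described in the abstract, the matrix rows would instead be copied to the device in blocks small enough to fit in card memory, with one such sweep applied per block and per iteration.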
Affiliations:
Llano-Ríos T.F.:
Departamento de Informática y Sistemas, Universidad EAFIT, Medellín, Antioquia, Colombia
Ocampo-García J.D.:
Departamento de Informática y Sistemas, Universidad EAFIT, Medellín, Antioquia, Colombia
Yepes-Ríos J.S.:
Departamento de Informática y Sistemas, Universidad EAFIT, Medellín, Antioquia, Colombia
Correa-Zabala F.J.:
Departamento de Informática y Sistemas, Universidad EAFIT, Medellín, Antioquia, Colombia
Trefftz C.:
School of Computing and Information Systems, Grand Valley State University, Grand Rapids, MI, United States