Parallelization of reordering algorithms for bandwidth and wavefront reduction

KI Karantasis, A Lenharth, D Nguyen… - SC'14: Proceedings …, 2014 - ieeexplore.ieee.org
Many sparse matrix computations can be sped up if the matrix is first reordered. Reordering was originally developed for direct methods, but it has recently become popular for improving the cache locality of parallel iterative solvers, since reordering the matrix to reduce bandwidth and wavefront can improve the locality of reference of sparse matrix-vector multiplication (SpMV), the key kernel in iterative solvers. In this paper, we present the first parallel implementations of two widely used reordering algorithms: Reverse Cuthill-McKee (RCM) and Sloan. On 16 cores of the Stampede supercomputer, our parallel RCM is 5.56 times faster on average than a state-of-the-art sequential implementation of RCM in the HSL library. Sloan is significantly more constrained than RCM, but our parallel implementation achieves a speedup of 2.88X on average over sequential HSL-Sloan. Reordering the matrix using our parallel RCM and then performing 100 SpMV iterations is twice as fast as using HSL-RCM and then performing the SpMV iterations; it is also 1.5 times faster than performing the SpMV iterations without reordering the matrix.
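The abstract's central idea, reordering a sparse matrix to reduce bandwidth so that SpMV touches memory more locally, can be illustrated with the sequential RCM implementation that ships with SciPy (`scipy.sparse.csgraph.reverse_cuthill_mckee`). This is a minimal sketch, not the paper's parallel algorithm: the toy matrix, the `bandwidth` helper, and the path-graph labeling are illustrative choices, not anything taken from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Max |i - j| over the nonzeros of sparse matrix A."""
    coo = A.tocoo()
    return int(np.abs(coo.row - coo.col).max())

# A path graph 0-2-4-1-3 stored with a deliberately bad vertex labeling,
# so its nonzeros lie far from the diagonal.
edges = [(0, 2), (2, 4), (4, 1), (1, 3)]
r = [i for i, j in edges] + [j for i, j in edges]
c = [j for i, j in edges] + [i for i, j in edges]
A = csr_matrix((np.ones(len(r)), (r, c)), shape=(5, 5))

# RCM returns a permutation; applying it to both rows and columns
# relabels the vertices so neighbors get nearby indices.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]

print(bandwidth(A), bandwidth(B))  # bandwidth shrinks after reordering
```

Rows and columns must be permuted together (a symmetric permutation) so that the reordered matrix represents the same graph; the bandwidth drop is what translates into better cache reuse when SpMV streams over the rows.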