Optimized broadcast for deep learning workloads on dense-GPU InfiniBand clusters: MPI or NCCL?
Proceedings of the 25th European MPI Users' Group Meeting (EuroMPI '18), 2018. dl.acm.org
Traditionally, MPI runtimes have been designed for clusters with a large number of nodes. However, with the advent of MPI+CUDA applications and dense multi-GPU systems, it has become important to design efficient communication schemes. This trend, coupled with the new application workloads brought forward by Deep Learning frameworks like Caffe and Microsoft CNTK, poses additional design constraints because very large GPU buffers must be communicated during the training phase. In this context, special-purpose libraries like NCCL have been proposed. In this paper, we propose a pipelined chain (ring) design for the MPI_Bcast collective operation, along with an enhanced collective tuning framework in MVAPICH2-GDR that enables efficient intra-/inter-node multi-GPU communication. We present an in-depth performance landscape for the proposed MPI_Bcast schemes along with a comparative analysis of NCCL Broadcast and NCCL-based MPI_Bcast. The proposed designs for MVAPICH2-GDR deliver up to 14X and 16.6X improvements over NCCL-based solutions for intra-node and inter-node broadcast latency, respectively. In addition, the proposed designs provide up to 7% improvement over NCCL-based solutions for data-parallel training of the VGG network on 128 GPUs using Microsoft CNTK. The proposed solutions outperform the recently introduced NCCL2 library for small and medium message sizes and offer comparable or better performance for very large message sizes.
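The core of the proposed MPI_Bcast design is a pipelined chain: the root splits a large message into chunks, and each rank forwards chunk i to its successor while the next chunk is already in flight behind it, so every hop of the chain stays busy at the same time. The sketch below illustrates that general idea on host buffers with plain MPI point-to-point calls; the chunk size, the helper name chain_bcast, and the blocking send/receive pattern are illustrative assumptions, not the paper's MVAPICH2-GDR implementation, which operates directly on GPU buffers and overlaps transfers using CUDA-aware mechanisms such as GPUDirect RDMA.

```c
/* Minimal sketch of a pipelined chain (ring-style) broadcast on host buffers.
 * Assumption: chunk size, helper name, and blocking point-to-point calls are
 * illustrative only; a production design would overlap the receive of chunk
 * i+1 with the forward of chunk i (e.g., MPI_Irecv/MPI_Isend) and stage or
 * directly address GPU memory. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_BYTES (1 << 20)   /* 1 MiB pipeline unit (illustrative choice) */

static void chain_bcast(void *buf, size_t bytes, int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Position of this rank in a chain rooted at `root`. */
    int pos  = (rank - root + size) % size;
    int prev = (pos == 0)        ? MPI_PROC_NULL : (rank - 1 + size) % size;
    int next = (pos == size - 1) ? MPI_PROC_NULL : (rank + 1) % size;

    char *p = (char *)buf;
    for (size_t off = 0; off < bytes; off += CHUNK_BYTES) {
        int n = (int)((bytes - off < CHUNK_BYTES) ? (bytes - off) : CHUNK_BYTES);
        /* Receive the current chunk from the predecessor (root skips this),
         * then forward it to the successor (last rank skips this). */
        if (prev != MPI_PROC_NULL)
            MPI_Recv(p + off, n, MPI_BYTE, prev, 0, comm, MPI_STATUS_IGNORE);
        if (next != MPI_PROC_NULL)
            MPI_Send(p + off, n, MPI_BYTE, next, 0, comm);
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t bytes = 8 * CHUNK_BYTES;           /* 8 MiB payload */
    char *buf = malloc(bytes);
    if (rank == 0) memset(buf, 42, bytes);    /* root fills the buffer */

    chain_bcast(buf, bytes, 0, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

For the very large messages typical of DL gradient exchange, this pipelining keeps the broadcast cost close to a single message transmission time plus a small per-hop start-up term, which is why chain/ring designs are generally preferred over tree-based broadcasts at these message sizes.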