Profiling the BLAST bioinformatics application for load balancing on high-performance computing clusters
BMC Bioinformatics, 2022 • Springer
Abstract
Background
The Basic Local Alignment Search Tool (BLAST) is a suite of commonly used algorithms for identifying matches between biological sequences. The user supplies a database file and a query file of sequences for BLAST to find matching sequences between the two. The typical millions of database and query sequences make BLAST computationally challenging but also well suited for parallelization on high-performance computing clusters. The efficacy of parallelization depends on the data partitioning, and the optimal data partitioning relies on an accurate performance model. In previous studies, a BLAST job was sped up 27-fold by partitioning the database and query among thousands of processor nodes, but the optimality of the partitioning method was not studied. Unlike the BLAST performance models proposed in the literature, which usually have problem size and hardware configuration as the only variables, the execution time of a BLAST job is modeled here as a function of database size, query size, and hardware capability. In this work, the nucleotide BLAST application BLASTN was profiled using three methods: shell-level profiling with the Unix “time” command, code-level profiling with the built-in “profiler” module, and system-level profiling with the Unix “gprof” program. Runtimes were measured for six node types, using six database files and 15 query files, on a heterogeneous HPC cluster with 500+ nodes. The empirical measurement data were fitted with quadratic functions to develop performance models, which were then used to guide the data parallelization of BLASTN jobs.
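The model-fitting step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the synthetic inputs, and the full-quadratic form T(d, q) = c0 + c1·d + c2·q + c3·d² + c4·d·q + c5·q² are assumptions consistent with "fitted with quadratic functions" but not confirmed by the abstract.

```python
import numpy as np

# Hedged sketch: fit a quadratic performance model for runtime T as a
# function of database size d and query size q, per node type. The
# full-quadratic basis below is an assumed form for illustration.

def fit_quadratic_model(d, q, t):
    """Least-squares fit of runtime t against database size d and query size q."""
    X = np.column_stack([np.ones_like(d), d, q, d**2, d * q, q**2])
    coeffs, *_ = np.linalg.lstsq(X, t, rcond=None)
    return coeffs

def predict(coeffs, d, q):
    """Predicted runtime for one (database size, query size) pair."""
    return coeffs @ np.array([1.0, d, q, d**2, d * q, q**2])
```

Once fitted per node type, such a model predicts the runtime of any (database fragment, query fragment) pair, which is exactly the input a data-partitioning or load-balancing step needs.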
Results
Profiling results showed that BLASTN contains more than 34,500 distinct functions, but a single function, RunMTBySplitDB, accounts for 99.12% of the total runtime. Among its 53 child functions, five core functions were identified that together make up 92.12% of the overall BLASTN runtime. Based on the performance models, static load-balancing algorithms can be applied to the BLASTN input data to minimize the runtime of the longest job on an HPC cluster. Four test cases were run on homogeneous and heterogeneous clusters. Experimental results showed that re-distributing the workload reduced the runtime by 81% on a homogeneous cluster and by 20% on a heterogeneous cluster.
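A minimal sketch of the kind of static load balancing described here, assuming per-task runtimes predicted by a performance model and relative node speeds (all costs, speeds, and names below are illustrative assumptions, not the paper's data or algorithm): greedily assign each task, largest first, to the node that would finish it soonest.

```python
# Hedged sketch of static load balancing over heterogeneous nodes:
# minimize the finish time of the slowest node (the "longest job")
# with a greedy longest-processing-time (LPT) heuristic.

def balance(task_costs, node_speeds):
    """Return (per-node task lists, makespan) for a greedy LPT schedule."""
    finish = [0.0] * len(node_speeds)       # projected finish time per node
    assignment = [[] for _ in node_speeds]  # task costs placed on each node
    for cost in sorted(task_costs, reverse=True):
        # pick the node that would complete this task the earliest
        i = min(range(len(finish)), key=lambda j: finish[j] + cost / node_speeds[j])
        finish[i] += cost / node_speeds[i]
        assignment[i].append(cost)
    return assignment, max(finish)
```

With equal node speeds this reduces to classic LPT scheduling on a homogeneous cluster; unequal speeds capture the heterogeneous case, where faster nodes absorb proportionally more work.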
Discussion
Optimal data partitioning can improve BLASTN’s overall runtime 5.4-fold compared with dividing the database and query into the same number of fragments. The proposed methodology can be applied to the other applications in the BLAST+ suite, or to any other application, as long as the source code is available.