Recursion continues to play an important role in high-performance computing. However, parallelizing recursive algorithms while achieving high performance is nontrivial and can result in complex, hard-to-maintain code. In particular, assigning processors to subproblems is complicated by recent observations that communication costs often dominate computation costs. Previous work demonstrates that carefully choosing which divide-and-conquer steps to execute in parallel (breadth-first steps) and which to execute sequentially (depth-first steps) can yield significant performance gains over naive scheduling. Our Framework for Recursive Parallel Algorithms (FRPA) separates an algorithm’s implementation from its parallelization. The programmer simply defines how to split a problem, solve the base case, and merge solved subproblems; FRPA handles parallelizing the code and tuning the recursive parallelization strategy, enabling algorithms to achieve high performance. To demonstrate FRPA’s performance capabilities, we present a detailed analysis of two algorithms: Strassen-Winograd and Communication-Optimal Parallel Recursive Rectangular Matrix Multiplication (CARMA). Our single-precision CARMA implementation is fewer than 80 lines of code and achieves a speedup of up to 11X over Intel’s Math Kernel Library (MKL) matrix multiplication routine on “skinny” matrices. Our double-precision Strassen-Winograd implementation, at just 150 lines of code, is up to 45% faster than MKL for large square matrix multiplications. To show FRPA’s generality and simplicity, we implement six additional algorithms: mergesort, quicksort, TRSM, SYRK, Cholesky decomposition, and Delaunay triangulation. FRPA is implemented in C++, runs in shared-memory environments, uses Intel’s Cilk Plus for task-based parallelism, and leverages OpenTuner to tune the parallelization strategy.
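
To make the split / base-case / merge contract concrete, below is a minimal sketch of a recursive-parallel mergesort written in that style. It is an illustration under stated assumptions, not FRPA’s actual interface: the struct name SortProblem, the method names (isBaseCase, runBaseCase, split, merge), and the fixed base-case cutoff are all hypothetical. Only the overall decomposition and the use of Cilk Plus (cilk_spawn / cilk_sync) follow the description above.

// A minimal sketch of the split / base-case / merge style described above.
// All names here are illustrative assumptions, not FRPA’s real API.
#include <algorithm>
#include <cstddef>
#include <utility>
#include <cilk/cilk.h>

struct SortProblem {
    int* data;
    std::size_t n;

    // Hypothetical cutoff; in FRPA such parameters are tuned (e.g., by OpenTuner).
    bool isBaseCase() const { return n <= 1024; }

    // Solve a leaf sequentially.
    void runBaseCase() { std::sort(data, data + n); }

    // Split into two independent subproblems over disjoint halves.
    std::pair<SortProblem, SortProblem> split() const {
        std::size_t half = n / 2;
        return {SortProblem{data, half}, SortProblem{data + half, n - half}};
    }

    // Merge the two sorted halves back into one sorted range, in place.
    void merge() {
        std::inplace_merge(data, data + n / 2, data + n);
    }
};

// A breadth-first (parallel) step spawns both subproblems; a depth-first
// (sequential) step would recurse without cilk_spawn. FRPA chooses the
// interleaving of such steps automatically; here one is hard-coded.
void solve(SortProblem p) {
    if (p.isBaseCase()) { p.runBaseCase(); return; }
    auto halves = p.split();
    cilk_spawn solve(halves.first);
    solve(halves.second);
    cilk_sync;
    p.merge();
}

In FRPA itself, whether each level of the recursion runs breadth-first or depth-first is a tunable decision made by OpenTuner rather than fixed in the source, which is what allows the framework to adapt the parallelization strategy to the machine and problem size.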