Scalable Bootstrapping for Python

ABSTRACT
High-level productivity languages such as Python, Matlab, and R are popular choices for scientists doing data analysis. However, for today's increasingly large datasets, applications written in these languages may run too slowly, if at all. In such cases, an experienced programmer must typically rewrite the application in a lower-level, higher-performance language such as C or C++, but this work is intricate, tedious, and often non-reusable. To bridge this gap between programmer productivity and performance, we extend an existing framework that uses just-in-time code generation and compilation. This framework follows the SEJITS methodology (Selective Embedded Just-In-Time Specialization [11]), converting programs written in domain-specific embedded languages (DSELs) into programs in languages suitable for high-performance or parallel computation.

We present a Python DSEL for a recently developed, scalable bootstrapping method; the DSEL executes efficiently on a distributed cluster. In previous work [16], Prasad et al. created a compiler for the same DSEL (with minor differences) that generates OpenMP or Cilk code. In this work, we create a new DSEL compiler that instead emits code to run on Spark [18], a distributed processing framework. Using two example applications of bootstrapping, we show that the resulting distributed code achieves near-perfect strong scaling from 4 to 32 eight-core computers (32 to 256 cores) on datasets up to hundreds of gigabytes in size. With our DSEL, a data scientist can write a single program in serial Python that can run "toy" problems in plain Python, non-toy problems fitting on a single computer in OpenMP or Cilk, and non-toy problems with large datasets on a multi-computer Spark installation.
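To make the "single program in serial Python" workflow concrete, the sketch below shows a plain-Python bootstrap of the general kind such a DSEL targets: the user supplies an estimator and a reducer, and a driver applies them across resamples drawn with replacement. This is an illustrative sketch only; the names (estimator, reducer, bootstrap) are hypothetical and do not reflect the DSEL's actual API, and in the paper's workflow the same serial specification would be retargeted by the SEJITS compiler to OpenMP/Cilk or Spark rather than run as shown here.

```python
# Illustrative sketch only: a plain-Python bootstrap of the kind the DSEL
# expresses. Function names here are hypothetical, not the DSEL's API.
import numpy as np

def estimator(sample):
    # User-supplied statistic computed on one resample (here: the mean).
    return np.mean(sample)

def reducer(estimates):
    # User-supplied combination of per-resample estimates
    # (here: their standard deviation, i.e. a bootstrap standard error).
    return np.std(estimates)

def bootstrap(data, num_resamples=100, seed=0):
    # Draw resamples with replacement, apply the estimator to each,
    # then reduce the per-resample estimates to a single result.
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = [estimator(data[rng.integers(0, n, size=n)])
                 for _ in range(num_resamples)]
    return reducer(estimates)

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(size=10_000)
    print(bootstrap(data))  # bootstrap standard error of the sample mean
```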