We present a machine learning approach to the algorithm selection problem, focusing on algorithms for dense matrix multiplication. Dense matrix multiplication is a core component of many high-performance computing and machine learning workloads, but the performance of competing matrix multiplication algorithms can vary significantly with input parameters and hardware architecture. We build performance models for multiple machines using support vector machines (SVMs) and show that a sparse exploration of the input space suffices to accurately predict the best choice of algorithm over a wide range of possible inputs. By using this classifier to choose the algorithm at runtime, we achieve up to a 26% increase in average performance over committing to a single algorithm a priori, within 1.5% of the performance of a perfect algorithm selector.
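The classifier-based selection scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two cost models are synthetic stand-ins for measured algorithm timings, and the shape ranges and SVM hyperparameters are assumptions chosen for the example.

```python
# Hypothetical sketch of SVM-based algorithm selection for matrix multiply.
# cost_a / cost_b are synthetic stand-ins for measured per-algorithm timings.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def cost_a(m, n, k):
    # Stand-in for a low-overhead kernel: cheap setup, wins on small problems.
    return 1.0 * m * n * k + 10.0

def cost_b(m, n, k):
    # Stand-in for a kernel with higher fixed overhead but a lower
    # per-element cost, so it wins on large problems.
    return 0.7 * m * n * k + 5000.0

# Sparse exploration: time both algorithms on a small sample of input shapes
# rather than exhaustively sweeping the input space.
shapes = rng.integers(1, 50, size=(60, 3))
labels = np.array([0 if cost_a(*s) <= cost_b(*s) else 1 for s in shapes])

# Train an SVM classifier mapping input shape -> fastest algorithm.
clf = SVC(kernel="rbf", gamma="scale").fit(shapes, labels)

# Runtime selection: predict the best algorithm for an unseen shape.
best = int(clf.predict([[40, 40, 40]])[0])
```

At runtime the predicted label would index into a table of algorithm implementations, so selection costs a single classifier evaluation per call.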