Research

My PhD research currently focuses on matrix compression within machine learning algorithms. Building linear and kernel classifiers on large datasets can exhaust computational and memory resources, so my goal is to find ways to compress the data while maintaining good classification performance.
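As a toy illustration of the compression idea (a generic Gaussian random-projection sketch, not a specific algorithm from my work), one can shrink the feature dimension of a data matrix while approximately preserving pairwise distances, so that a classifier trained on the compressed data can still separate the classes:

```python
import numpy as np

# Illustrative only: compress a wide data matrix with a Gaussian
# random projection before classification. Johnson-Lindenstrauss-style
# projections approximately preserve pairwise distances.
rng = np.random.default_rng(0)

n, d, k = 1000, 500, 50          # samples, original dim, compressed dim
X = rng.standard_normal((n, d))  # placeholder "large" data matrix

# Gaussian sketching matrix, scaled so E[||x S||^2] = ||x||^2
S = rng.standard_normal((d, k)) / np.sqrt(k)
X_small = X @ S                  # n x k compressed data

# Distances between rows are roughly preserved after compression
orig = np.linalg.norm(X[0] - X[1])
comp = np.linalg.norm(X_small[0] - X_small[1])
print(orig, comp)  # the two values agree up to small relative error
```

The point of the sketch is that downstream training now touches an n x k matrix instead of an n x d one, trading a small geometric distortion for large memory and compute savings.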

Previously, I developed a machine learning algorithm for simultaneous non-linear classification and sparse variable selection. Simulation studies show that it outperforms common non-parametric classifiers, and my coauthors and I also provide theoretical results on the risk of the learned classifier.
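To illustrate the general goal (this uses off-the-shelf tools, mutual-information screening plus an RBF-kernel SVM, not my algorithm): given many candidate variables of which only a few drive the label non-linearly, a method should both identify those variables and fit a non-linear decision rule on them.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, d = 500, 20
X = rng.standard_normal((n, d))
# Label depends non-linearly on features 0 and 1 only
y = ((X[:, 0] ** 2 + X[:, 1] ** 2) > 2.0).astype(int)

# Variable selection: screen features by mutual information with the label
scores = mutual_info_classif(X, y, random_state=0)
selected = np.argsort(scores)[-2:]

# Non-linear classification on the selected variables only
clf = SVC(kernel="rbf").fit(X[:, selected], y)
print(sorted(selected), clf.score(X[:, selected], y))
```

Note that a purely linear classifier would fail here (the true boundary is a circle), and a non-linear classifier using all 20 variables would waste capacity on the 18 irrelevant ones; doing both steps is what makes the problem interesting.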

My undergraduate research focused on compressed sensing and signal processing. My coauthors and I extended results characterizing when sparse vectors can be effectively recovered from one-bit measurements drawn from general sub-gaussian distributions.
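A minimal sketch of the one-bit measurement model (using a Gaussian measurement matrix and a simple back-projection estimator for illustration; it is not the estimator or the sub-gaussian setting from our results): each measurement records only the sign of a linear functional of the sparse signal, yet the direction of the signal can still be approximately recovered.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, s = 200, 1000, 5          # ambient dim, measurements, sparsity

# Unit-norm s-sparse signal
x = np.zeros(d)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# One-bit measurements: keep only the sign of each linear measurement
A = rng.standard_normal((m, d))  # Gaussian measurement matrix
y = np.sign(A @ x)

# Simple linear estimator: back-project the signs, keep the top-s entries
z = A.T @ y
support = np.argsort(np.abs(z))[-s:]   # estimated support
x_hat = np.zeros(d)
x_hat[support] = z[support]
x_hat /= np.linalg.norm(x_hat)

print(np.dot(x, x_hat))  # near 1 when the direction is recovered
```

Since the signs discard all magnitude information, only the direction of x is recoverable; the question our work addressed is how such guarantees behave when the rows of the measurement matrix are drawn from general sub-gaussian, rather than Gaussian, distributions.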