Workshop on Anchoring Machine Learning in Classical Algorithmic Theory
29 April 2022 | Virtual | ICLR 2022
Recent advances in Machine Learning (ML) have revolutionized our ability to solve complex problems in a myriad of application domains. Yet, just as empirical data plays a fundamental role in the development of such applications, the process of designing these methods has also remained empirical: we have learned which of the known methods tend to perform better for certain types of problems, and have developed intuition guiding our discovery of new methods.
In contrast, classical algorithmic theory provides tools directly addressing the mathematical core of a problem, and clear theoretical justifications motivate powerful design techniques. At the heart of this process is the analysis of the correctness and time/space efficiency of an algorithm, providing actionable bounds and guarantees. Problems themselves may be characterized by bounding the performance of any algorithm, providing a meaningful reference point against which concrete algorithms may be compared. While ML models may appear to be an awkward fit for such techniques, some research in the area has succeeded in obtaining results with the “definitive” flavor associated with algorithms, complementary to empirical ones. Are such discoveries bound to be exceptions, or can they become part of a new algorithmic theory?
The GroundedML workshop seeks to bring together researchers from both the algorithmic theory and machine learning communities, starting a dialogue on how ideas from theoretical algorithm design can inspire and guide future research in ML. We group the topics that the workshop is concerned with into four general areas of study.
Frameworks for studying time complexity bounds of ML models (e.g., designing speedups that are asymptotically significant).
Frameworks for studying the accuracy of a model with respect to an optimal solution.
Negative results: what can a given model provably not do?
Sample complexity, actionable statistical guarantees, extrapolation capacity.
Complexity theory: study the inherent complexity class of a problem.
Bounds on the learning capacity (expressivity) of certain model design paradigms for a given problem (e.g., is a given architecture capable of learning a given type of correlation?).
Bounds on the very idea of porting algorithmic theory methods to machine learning (e.g., can we prove that some of the goals of the GroundedML workshop are hopeless?).
Model design paradigms: specialized methods for designing algorithms from formal problem specifications (e.g., dynamic programming).
Standardized techniques for asymptotically significant speedups.
General techniques for algorithmic alignment, modularity, and compositionality of ML models.
Design of data structures that lend themselves well to ML consumption (e.g., to speed up computation).
We welcome papers that propose benchmarks or datasets that are of relevance to algorithmic alignment.
News and Updates
[December 7, 2021] Workshop proposal accepted!
[February 4, 2022] Submissions site open!
[April 4, 2022] List of accepted papers published!
We solicit paper submissions on algorithmic guarantees, characterization, and design paradigms of machine learning algorithms. Papers will be peer reviewed under a double-blind policy; the submission deadline is 26 February 2022 at 12:00 AM UTC. Accepted papers will be presented at the poster session, some as orals, and one paper will receive the best paper award.
Submit your paper on CMT