CS 329M: Machine Programming (2023)
Instructor: Justin Gottschlich
Stanford Course: CS 329M (3-4 credits)
Dates: September 26 - December 9 (2023)
When: Tues / Thurs 4:30-6:20pm
Building: History Corner (Building 200), room 200-002
Stanford CS 329M Lectures (September - December 2023):
[Week 05] October 22-28: Lecture 9, <Student Presentations>
[Week 06] October 29-November 4: Lecture 10, Lecture 11
[Week 07] November 5-11: <Stanford Democracy Day; no class>, Lecture 12
[Week 08] November 12-18: Lecture 13, Lecture 14
[Week 09] November 19-25: <Autumn break; no class>
[Week 10] November 26-December 2: Lecture 15, Lecture 16
[Week 11] December 3-9: Lecture 17, <Student Presentations>
[Week 12] December 10-16: Finals Week (Take-home Final Exam, due midnight on Dec. 14)
Course Description
The field of machine programming (MP) is concerned with the automation of software development. Given recent advances in software algorithms, hardware efficiency and capacity, and the ever-increasing availability of code data, it is now possible to train machines to help develop software. In this course, we teach students how to build real-world MP systems. We begin with a high-level overview of the field, including an abbreviated analysis of the state of the art (e.g., Merly Mentor). Next, we discuss the foundations of MP and the key areas for innovation, some of which are unique to MP. We close with a discussion of current limitations and future directions of MP. This course includes a nine-week hands-on project, in which students (individually or in small groups) will create their own MP system and demonstrate it to the class. This course is primarily intended for graduate students and is not recommended for undergraduates (no undergraduate admittance without instructor approval).
While teaching machines to perform programming-specific tasks overlaps with traditional techniques for training machines on non-programming tasks (e.g., natural language processing, computer vision), it is unique in at least two dimensions. First, certain techniques are more (or less) effective for MP, such as using self-supervision to learn from the large corpora of unlabeled open-source code. Second, software reasoning is fundamentally multi-dimensional; that is, there exist multiple distinct ways to learn from software (e.g., static analysis, dynamic analysis, input/output specifications, program state reinforced-convergence, hardware telemetry data, etc.). In this course, we discuss each of these techniques (and others) and how they can be effectively applied to MP systems.
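As a concrete (and deliberately simplified) illustration of the self-supervision idea mentioned above: unlabeled source code can be turned into supervised training pairs by masking tokens and asking a model to predict them. The function below is a minimal sketch of that data-preparation step, not the pipeline of any particular system; the regex-based "lexer" is an assumption made for brevity.

```python
import re

def make_masked_pairs(source: str, mask_token: str = "<MASK>"):
    """Generate self-supervised (input, target) pairs from unlabeled code
    by masking one identifier-like token at a time.

    Illustrative sketch only: real systems use proper lexers/parsers,
    not a regex, and mask at the subtoken or AST level."""
    # Find identifier-like tokens (a deliberate simplification of lexing).
    tokens = [(m.start(), m.group()) for m in re.finditer(r"[A-Za-z_]\w*", source)]
    pairs = []
    for start, tok in tokens:
        masked = source[:start] + mask_token + source[start + len(tok):]
        pairs.append((masked, tok))  # model must predict the masked identifier
    return pairs

pairs = make_masked_pairs("total = price * qty")
# e.g., the first pair is ("<MASK> = price * qty", "total")
```

No human labeling is required: every (masked program, original token) pair is derived mechanically from the raw corpus, which is what makes self-supervision attractive for the enormous bodies of open-source code.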
Prerequisites
This course is designed for graduate students. Highly talented undergraduates can be admitted with instructor approval. The following are the prerequisites for this course.
Required:
(i) Deep learning or (ii) linear algebra and mathematical maturity.
Experience with the C and C++ programming languages (PLs).
Software engineering coursework or experience.
Recommended:
Python programming experience, including experience with TensorFlow or PyTorch.
Basic understanding of PL abstractions, static and dynamic program analysis, compilers, and general machine learning techniques.
Students do not need an extensive background in machine learning, data systems, programming languages, software engineering, or compilers; the necessary aspects of these fields will be covered as needed. However, students with a background in these topics will likely find it easier to understand the intuition behind some of the more advanced MP topics in the course (e.g., building semantics reasoners, program synthesis for MP data generation).
Tentative Course Syllabus
The course lectures cover the following major segments:
Introduction & Overview: lecture 1 focuses on a high-level overview of the "what" and "why" of machine programming as well as explaining how to deeply reason about technology. Lecture 2 provides insight into the three pillars of MP and the importance of programming languages and software engineering for MP.
Core: these are lectures in core areas that are foundational to most MP systems. These core aspects can differ from (and in some cases are entirely disjoint from) what is required for typical machine learning systems.
Deep Data: two lectures on the various ways to reason about, generate, and utilize data for MP systems. The first discusses classical ML-based data utilization (e.g., training, validation, and testing); the second covers emerging ways to harness, automatically synthesize, and label data for MP systems (e.g., program synthesis for data generation, dynamic execution information for analysis, automated semantics labeling).
Semantic Reasoning: two lectures on the foundations, construction, and utilization of semantic reasoning systems in MP. The first covers basics from other CS courses that are necessary for advanced reasoning about syntax and semantics; the second covers how to construct semantic representations and use them for downstream MP tasks.
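To make the "program synthesis for data generation" idea from the Deep Data segment concrete, here is a minimal sketch: enumerate tiny programs, execute them on sampled inputs, and record automatically labeled (program, input, output) triples. The program set and names below are hypothetical illustrations, not part of any course system.

```python
import random

# Hypothetical toy "program space": tiny expression programs paired with
# their executable semantics.
OPS = {
    "x + 1": lambda x: x + 1,
    "x * 2": lambda x: x * 2,
    "x * x": lambda x: x * x,
}

def synthesize_dataset(n_inputs: int = 3, seed: int = 0):
    """Build a labeled dataset by running each enumerated program on
    sampled inputs. Every label comes from dynamic execution, so no
    human annotation is needed."""
    rng = random.Random(seed)
    dataset = []
    for src, fn in OPS.items():
        for _ in range(n_inputs):
            x = rng.randint(-5, 5)
            dataset.append({"program": src, "input": x, "output": fn(x)})
    return dataset

data = synthesize_dataset()  # 3 programs x 3 inputs = 9 labeled examples
```

Real synthesis-driven data generation enumerates far richer program spaces (grammars, DSLs, mutations of real code), but the principle is the same: execution turns generated programs into labeled training data for free.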
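One signal that semantic reasoning systems exploit, as discussed in the Semantic Reasoning segment, is behavioral evidence from dynamic analysis: two programs with very different syntax can be judged semantically similar if they agree on sampled inputs. The snippet below is a purely illustrative sketch of that idea, not a description of any specific semantics reasoner.

```python
# Two syntactically different programs with the same semantics.
def sum_loop(xs):
    total = 0
    for v in xs:
        total += v
    return total

def sum_builtin(xs):
    return sum(xs)

def behaviorally_similar(f, g, test_inputs):
    """Return True if f and g agree on every sampled input.

    Agreement on samples is evidence of (not proof of) semantic
    equivalence -- a key caveat for dynamic-analysis-based reasoning."""
    return all(f(x) == g(x) for x in test_inputs)

inputs = [[], [1, 2, 3], [-4, 4], [10]]
print(behaviorally_similar(sum_loop, sum_builtin, inputs))  # True
```

Token-level similarity would score these two functions as quite different; behavioral comparison captures what they actually compute, which is why multi-dimensional reasoning (static plus dynamic) matters for MP.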
Tentative Grading
65% project (expectation: spend 5-10 hours per week), broken down as:
  10% proposal: 1-2 page write-up
  10% checkpoint: 10-minute presentation
  30% report: 10-page report, appendices allowed
  15% presentation: 10-15 minute presentation in class
25% exams (12.5% mid-term, 12.5% final)
10% assignments (3 assignments, ~3.33% each)
Bonus:
+6% attendance (taken 10 minutes after class starts; 18 lectures * ~0.33% each)
Examples of Outstanding Student Assignment Reports (2023, Autumn Quarter):
Assignment #1 from Jamil Dhanani (outstanding scientific merit, excellent formatting, style)
Assignment #1 from Martin Juan Jose Bucher (outstanding scientific merit, tier-1 conference-level formatting and style)
Reading List
Week 1:
Three Pillars of Machine Programming (Gottschlich et al.)
The Case for Learned Index Structures (Kraska et al.)
AI Programmer: Autonomously Creating Software Programs Using Genetic Algorithms (Becker and Gottschlich)
Week 2:
Aroma: Code Recommendation via Structured Code Search (Luan et al.)
Halide: A Language and Compiler for Optimizing Parallelism, Locality, and Recomputation in Image Processing Pipelines (Ragan-Kelley et al.)
code2vec: Learning Distributed Representations of Code (Alon et al.)
Week 3:
Automating String Processing in Spreadsheets Using Input-Output Examples (Gulwani)
Neural Code Comprehension: A Learnable Representation of Code Semantics (Ben-Nun et al., NeurIPS 2018)
Neo: A Learned Query Optimizer (Marcus et al.)
Bao: Making Learned Query Optimization Practical (Marcus et al.)
Week 4:
Learning to Represent Programs with Graphs (Allamanis et al., ICLR 2018)
MISIM: A Neural Code Semantics Similarity System Using the Context-Aware Semantics Structure (Ye et al.)
A Zero-Positive Learning Approach For Diagnosing Software Performance Regressions (Alam et al., NeurIPS 2018, video)
Self-supervised Bug Detection and Repair (Allamanis et al., NeurIPS 2021, video)
Week 5:
Evaluating Large Language Models Trained on Code (Chen et al.)
ControlFlag: A Self-Supervised Idiosyncratic Pattern Detection System for Software Control Structures (Hasabnis & Gottschlich)
Hoppity: Learning Graph Transformations to Detect and Fix Bugs in Programs (Dinella et al.)
Week 6:
Verified Lifting for Stencil Computations (Kamil et al.)
Learning Fitness Functions for Machine Programming (Mandal et al., MLSys '21)
Week 7:
Program Synthesis for Scientific Computing (Finkel et al., US Department of Energy 2021)
Learning to Represent Programs with Property Signatures (Odena and Sutton, ICLR 2020)
Week 8:
Automatically Translating Image Processing Libraries to Halide (Ahmad et al., SIGGRAPH '19)
Week 9:
Software Language Comprehension using a Program-Derived Semantics Graph (Iyer et al., NeurIPS CAP, 2020)
Week 10:
A Survey on Semantic Parsing for Machine Programming (Lee et al., KDD PLL 2021)
2022 Lectures:
Part 1: Lecture 1. Lecture 2. Lecture 3. Lecture 4. Lecture 5. Lecture 6. Lecture 7. Lecture 8. Lecture 9.
Part 2: Lecture 10. Lecture 11. Lecture 12. Lecture 13. Lecture 14 (guest lecture). Lecture 15.
Part 3: Lecture 16 (student presentations). Lecture 17 (student presentations).