The 1W-MINDS Seminar was founded in the early days of the COVID-19 pandemic, when travel was impossible. We have continued it since then to help build an inclusive community interested in mathematical data science, computational harmonic analysis, and related applications by providing free access to high-quality talks without the need to travel. In the spirit of environmental and social sustainability, we welcome you to participate in both the seminar and our Slack channel community! Zoom talks are held on Thursdays at 2:30 pm New York time. To find and join the 1W-MINDS Slack channel, please click here.
Current Organizers (September 2025 - May 2026): Ben Adcock (Simon Fraser University), March Boedihardjo (Michigan State University), Hung-Hsu Chou (University of Pittsburgh), Diane Guignard (University of Ottawa), Longxiu Huang (Michigan State University), Mark Iwen (Principal Organizer, Michigan State University), Siting Liu (UC Riverside), Kevin Miller (Brigham Young University), and Christian Parkinson (Michigan State University).
Most previous talks are available on the seminar's YouTube channel. You can catch up there, or even subscribe if you like.
To sign up to receive email announcements about upcoming talks, click here.
To join the 1W-MINDS Slack channel, click here.
Passcode: the smallest prime > 100
In this talk, we argue that "inference-time" Large Language Model (LLM) operation, where we interact with these models post-training without modifying their weights, is fertile ground for information-theoretic methods. We focus on one challenge in particular: watermarking LLM-generated text. Watermarks enable authentication of text provenance and help curb misuse of machine-generated content. We present recent results establishing a close connection between LLM watermarking and coding theory, showing that classical tools such as the Plotkin bound yield fundamental limits on watermark performance. This perspective also informs the design of two practical watermarks: SimplexWater and HeavyWater. We show that these watermarks achieve high detection accuracy with minimal impact on text quality, even in low-entropy tasks such as code generation. We also briefly survey other inference-time challenges that can be addressed with information theory, such as inference-time alignment. These results illustrate a broader opportunity: as LLMs increasingly serve as black-box components of more complex systems, information and coding theory offer a principled toolkit for shaping, verifying, and controlling their outputs.
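For readers unfamiliar with LLM watermarking, the detection problem the abstract describes can be illustrated with a toy example. The sketch below is a generic hash-based "green list" watermark detector in the style of earlier schemes, not the SimplexWater or HeavyWater methods presented in the talk; the function names and the 50/50 vocabulary split are illustrative assumptions. The idea is that a watermarked generator pseudorandomly favors a hash-determined subset of the vocabulary at each step, and a detector later tests whether a suspiciously large fraction of tokens fall in that subset.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], frac: float = 0.5) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by a hash of the
    previous token. A watermarked generator favors this 'green' subset."""
    scored = sorted(vocab,
                    key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest())
    return set(scored[: int(len(scored) * frac)])

def detect(tokens: list[str], vocab: list[str], frac: float = 0.5) -> float:
    """Z-score against the null hypothesis 'text is not watermarked'.
    Under the null, each token lands in its green list with probability frac,
    so the green-hit count is approximately Binomial(n, frac)."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab, frac))
    n = len(tokens) - 1
    return (hits - frac * n) / math.sqrt(frac * (1 - frac) * n)
```

A large z-score is strong evidence of watermarking; unwatermarked text scores near zero. The abstract's coding-theory perspective concerns exactly this trade-off: how reliably such a signal can be detected while perturbing the model's output distribution as little as possible, especially in low-entropy settings like code generation where the generator has little freedom to favor green tokens.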