Jonathan P. Chang is an Assistant Professor of Computer Science at Harvey Mudd College (and also a Mudd alum, Class of 2017). He received his Ph.D. from Cornell University, where he was advised by Professor Cristian Danescu-Niculescu-Mizil. His research focuses on finding ways to promote healthier interactions in online communities. He approaches this problem from both a technical perspective, developing new algorithms and computational models to characterize and detect behaviors that are harmful to online communities, and a social perspective, exploring how such technologies can best be leveraged to create tools and policies with a positive impact. His work has been covered in popular media outlets including NPR, MIT Technology Review, and The Verge.
One of the biggest problems facing online platforms today is the prevalence of so-called “toxic” behavior, such as personal attacks, harassment, and general incivility. While AI and NLP breakthroughs have shown some potential to help with this problem, such technology has thus far been deployed mostly within a centralized, reactive paradigm, in which “black box” algorithms flag posts and comments for removal with little to no transparency or accountability. I am interested in a different way of applying this technology: a proactive paradigm in which AI-powered tools give feedback to individual users while they are writing their post or comment, offering guidance toward healthier, more pro-social ways of interacting, and thereby preventing toxicity from taking root.
Early results (see Helpful Links below) suggest that users are broadly interested in using such AI-powered feedback to guide their online posting and commenting. But we have yet to fully understand how this feedback concretely affects users’ posting and commenting behavior. This project aims to develop and implement user studies that will enable us to observe how users behave when presented with AI-powered pro-social feedback.
This is a long-term project with room for contribution from students across multiple levels of expertise and interest. Custom tooling will need to be developed to let participants interact with AI feedback; students who are new to computer science or are interested in a software engineering career rather than a research career may find this to be an exciting experience in software development. Students with more advanced machine learning background will have an opportunity to refine our language models based on the user feedback we receive. Finally, students with an interest in social science, law, or tech policy may be interested in applying their knowledge in those fields to help design the user study and interpret its results.
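As a concrete sketch of what such tooling might look like, here is a minimal, hypothetical feedback loop in Python. The function name `proactive_feedback` and the `score_toxicity` callback are assumptions made for illustration, standing in for whichever classifier and interface the study ultimately uses:

```python
def proactive_feedback(draft, score_toxicity, threshold=0.7):
    """Return a pre-posting nudge for a draft comment, or None.

    score_toxicity is a stand-in for any model that maps text to an
    estimated toxicity probability in [0, 1] (hypothetical interface).
    """
    score = score_toxicity(draft)
    if score >= threshold:
        # Proactive paradigm: guide the user before they post,
        # rather than removing content after the fact.
        return ("Your draft may come across as hostile. "
                "Consider rephrasing before posting.")
    return None
```

In a real deployment, the scoring model and the wording and timing of the nudge would themselves be objects of study in the user experiments described above.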
Prof. Kirabo adapts qualitative research methods to understand the lived experiences of research populations from diverse backgrounds, with a particular focus on disabled people.
She obtained her Ph.D. and MS in Human-Computer Interaction from Carnegie Mellon University, an MS in Information Technology from Carnegie Mellon University Africa, and a Bachelor's degree in Information Technology from Makerere University in Kampala, Uganda. She has ten years of experience working as a front-end developer and software engineer in the non-profit and clean-energy sectors.
We have very specific mental models and expectations of what climate change interventions and technologies should look like. In this project, we will break free of these norms by investigating new form factors, probing everyday devices that are often ignored, and envisioning new types of interactions. Our goal is to evaluate how these new interventions contribute to ongoing climate change literacy efforts.
Anyone excited about interaction design and design futures, and interested in contributing to the intersection of human-centered design and climate tech. Students will have the opportunity to build low- and high-fidelity prototypes.
Vidushi Ojha (she/her/hers) is an Assistant Professor in the Computer Science Department at Harvey Mudd College. Her research focuses on broadening participation in computing and aims to improve students' experiences in CS courses using both qualitative and quantitative methods. She is particularly interested in how policies and practices can impact outcomes such as students' sense of belonging and self-efficacy in computing.
Vidushi received a PhD in Computer Science from the University of Illinois at Urbana-Champaign in 2024, and a BS in Joint Computer Science & Math from Harvey Mudd College in 2017. She previously worked as a software engineer at Originate, Inc.
This project centers around the development of a computing education elective course for juniors and seniors in college. The goal of the course is to expose students to some of the foundational work in computing education research, as well as provide them with an opportunity to learn and implement qualitative research methods. An important component of this course is that it will use alternative grading policies, which seek to address the shortcomings of traditional grading methods by instead assessing students’ work based solely on their mastery of the content. A key goal for the research project is therefore to investigate existing grading policies and develop assignments for this course that can be graded in accordance with the ethos of these policies.
Students can expect to gain knowledge of: the field of computing education research, qualitative research methods, and alternative grading policies in higher education.
Anyone with an interest in learning more about computing education research is encouraged to apply!
Tim Randolph is an assistant professor of computer science at Harvey Mudd College. He studies the mathematics of algorithms and is particularly interested in finding upper and lower bounds on computationally hard problems that have interesting mathematical structure. He has a special interest in making mathematics education supportive and inclusive.
Tim completed his Ph.D. at Columbia University, where he was advised by Professors Rocco Servedio and Xi Chen. More about Tim and his work: twrand.github.io
Many fundamental problems in computer science have the following flavor: given as input a large data object (such as a graph or a set of numbers), can we find a large, mathematically structured sub-object hidden within it? Making progress requires learning about the ways other scientists have approached similar questions, teasing out complexity-theoretic relationships between problems, and trying out many, many flawed approaches to build a robust intuition about our objects of study.
This project will involve mastering the basics of one or more hard computational problems, stated in formal, mathematical terms. We’ll spend time understanding the algorithms that achieve the best known time and space bounds for our problem and look for settings in which we can make improvements!
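As one illustration of this kind of time-bound improvement (Subset Sum is assumed here purely as an example; the project's actual problems may differ), the classic meet-in-the-middle technique of Horowitz and Sahni improves the naive O(2^n) running time to roughly O(2^(n/2)) by splitting the input:

```python
def subset_sums(nums):
    # All achievable subset sums of nums (O(2^n) in the worst case).
    sums = {0}
    for x in nums:
        sums |= {s + x for s in sums}
    return sums

def has_subset_sum_naive(nums, target):
    # Baseline: enumerate every subset sum of the full input directly.
    return target in subset_sums(nums)

def has_subset_sum_mitm(nums, target):
    # Meet in the middle: enumerate 2^(n/2) sums for each half of the
    # input, then check whether some pair of half-sums hits the target.
    half = len(nums) // 2
    left = subset_sums(nums[:half])
    right = subset_sums(nums[half:])
    return any((target - s) in right for s in left)
```

The two functions answer the same question, but the second trades space for time by only ever enumerating subsets of each half — the sort of algorithmic idea this project looks to refine.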
Students who enjoy racking their brains over a tricky puzzle and those interested in the intersection between computer science and mathematics. Ideal students will have experience in algorithm design and analysis (for example, Harvey Mudd’s CSCI 140 or a similar course) and some experience with the tools of discrete mathematics, including graph theory, combinatorics, probability and/or abstract algebra.
Calden Wloka is an Assistant Professor of Computer Science at Harvey Mudd College where he runs the Laboratory for Cognition and Attention in Time and Space (Lab for CATS). His research focuses on visual cognition, both from the perspective of computer vision through the design and evaluation of computational vision models, and from the perspective of biological vision through psychophysics and eye tracking experiments to further our understanding of human vision. Calden completed his PhD in 2019 at York University advised by John Tsotsos.
Over the past decade, deep learning has become the predominant approach to computer vision problems. While deep neural networks have demonstrated highly impressive results in many areas, they still sometimes exhibit surprising brittleness and vulnerability to noise and data variation. There have been numerous studies comparing human visual robustness to noise and other manipulations of input images with the robustness of deep learning models, but the vast majority of these studies have focused on object recognition or detection. However, there are many other visual tasks of great importance, such as depth perception and action recognition in video, and it is unclear whether the findings of this prior literature extend to them. This project will investigate both human and computer model robustness in tasks beyond object detection, with the aim not only of improving our understanding of human vision, but also of establishing whether techniques aimed at improving vision model robustness extend across problem domains or need to be developed in a more domain-specific manner.
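As a small sketch of the low-level input manipulations such robustness studies rely on (the function name and the list-of-lists image representation are assumptions for this example; real pipelines would typically use OpenCV or PyTorch tensors), here is a pure-Python function that perturbs an image with Gaussian pixel noise. A study would then compare a model's predictions on clean versus noisy versions of the same input:

```python
import random

def add_gaussian_noise(pixels, sigma=0.1, seed=None):
    """Return a noisy copy of an image, clipped back into [0, 1].

    pixels: nested list (rows of floats in [0, 1]) standing in for a
    grayscale image. Each pixel is perturbed by N(0, sigma) noise, a
    common manipulation in robustness comparisons.
    """
    rng = random.Random(seed)
    return [[min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in row]
            for row in pixels]
```

Sweeping `sigma` upward and plotting model accuracy against noise level is one standard way to quantify the brittleness described above.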
We encourage students who are motivated by understanding the behavior and limits of current approaches to computer vision, and think hard about ways to explore and quantify those aspects of vision models. Students should have an interest in both cognitive science and computer vision, and will be working with both low-level manipulations of image data as well as high level code bases of established vision models. Experience using Python and git is a major asset. Furthermore, familiarity with commonly used libraries in this domain (in particular, OpenCV and PyTorch) will be beneficial.
George Montañez is an associate professor of computer science at Harvey Mudd College. He obtained his PhD in machine learning from Carnegie Mellon University, an MS in computer science from Baylor University, and a BS in computer science from the University of California, Riverside. Prof. George previously worked in industry as a data scientist (Microsoft AI+R), software engineer (Prestige Software), and web developer (360 Hubs, Inc.). His current research explores why machine learning works from a search and dependence perspective, and identifies information constraints on general search processes. He is the director of the AMISTAD Lab.
Casting machine learning as a form of randomized search, we explore the properties of search methods and what makes them successful. How does bias play a role in search? What information resources are necessary for probable success? How can information-theoretic aspects of artificial learning help us distinguish intentional from unintentional artifacts? We will explore questions like these by proving mathematical theorems and running simulations to gain insight into their answers.
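To make the role of bias in search concrete, here is a toy simulation (the function names and the particular search space are assumptions chosen for illustration, not the lab's formal framework) comparing how often a uniform sampler versus a sampler biased toward the target's region succeeds within a fixed query budget:

```python
import random

def success_rate(sampler, target, budget, trials=2000, seed=0):
    # Estimate the probability that `sampler` hits `target` at least
    # once within `budget` independent queries.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if any(sampler(rng) == target for _ in range(budget)):
            hits += 1
    return hits / trials

# Search space {0, ..., 99} with target 7; both samplers get the
# same query budget, but one concentrates its probability mass on
# the region where the target actually lives.
uniform = lambda rng: rng.randrange(100)
biased = lambda rng: rng.randrange(10)
```

Under the same budget, the biased sampler succeeds far more often — an empirical glimpse of the theorems relating favorable bias to probable search success.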
Anyone who has a curious mind, can concentrate on difficult problems for extended periods of time, and would like to get better at formal mathematical proof.
Recent recognition of a lab member: Harvey Mudd Student Kerria Pang-Naylor Earns Prestigious CRA Outstanding Undergraduate Research Award
Writeup of work from the lab: This Kind of Fun: An award-winning CS researcher balances academics and freelines
Prof. Talvitie went to Oberlin College and then graduate school at the University of Michigan. She wants to understand what it would take for an artificial agent to learn from experience and behave flexibly and competently in a complicated environment like the one we live in. She likes to find problems that seem to be in the way of that goal, figure out why they are hard, and ideally find ways to solve or at least work around them! She approaches these questions using tools in machine learning and artificial intelligence, especially in the area of reinforcement learning.
One reasonable way for an artificial agent to learn to make good decisions is for it to learn a predictive model of its environment so it can “imagine” possible futures and plan its behavior to obtain positive outcomes. However, in the inevitable event that the model has flaws, and therefore makes incorrect predictions, the agent may make catastrophically bad decisions. In recent work, Prof. Talvitie and collaborators have been exploring how one might detect errors in the model and selectively use the model to take advantage of what it can predict accurately, while avoiding catastrophic planning failure from what it cannot. The plan for this summer is to continue building on some promising results, potentially including extending these ideas into more complicated model architectures or planning methods, using uncertainty measures to mediate between multiple models with different biases and limitations, or exploring ideas that we haven’t had yet!
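One toy way to sketch the "selectively trust the model" idea (an illustrative assumption for this writeup, not the lab's actual method) is to query a small ensemble of learned models and plan only with predictions the ensemble agrees on, treating disagreement as a cheap uncertainty signal:

```python
def ensemble_predict(models, state, action):
    # Query each model in a small ensemble; use the spread of their
    # predictions as an uncertainty signal for this (state, action) pair.
    preds = [m(state, action) for m in models]
    mean = sum(preds) / len(preds)
    spread = max(preds) - min(preds)
    return mean, spread

def select_action(models, state, actions, threshold):
    # Selective model use: rank only actions whose predicted outcome
    # the ensemble agrees on; heavily disputed actions are left out,
    # guarding against planning on predictions the model gets wrong.
    trusted = {}
    for a in actions:
        value, spread = ensemble_predict(models, state, a)
        if spread <= threshold:
            trusted[a] = value
    return max(trusted, key=trusted.get) if trusted else None
```

The research questions above — which uncertainty measures to use, and how to mediate between multiple models with different biases — are about making this kind of mechanism principled rather than ad hoc.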
To make the most out of our collaboration, student researchers should:
- have experience with and interest in the fundamentals of supervised learning and/or reinforcement learning, most commonly demonstrated by success in a course that covers at least one of these subjects, though experience outside of formal coursework is also valued, and
- have experience or strong interest in pursuing open-ended problems that require creativity, tenacity, and careful, systematic problem-solving.
Though not strictly required, being a comfortable, confident C++ programmer is helpful for contributing to this project.