Measurement-based quantum computing (MBQC) is considered a promising approach for building practical quantum computers. This project focuses on implementing the ZX-calculus optimization technique for measurement-based quantum circuits and evaluating its effectiveness by comparing the performance of four established quantum circuits before and after applying ZX-calculus. The quantum algorithms selected for this project are Grover’s unstructured search, the Quantum Fourier Transform (QFT), the Quantum Approximate Optimization Algorithm (QAOA), and quantum teleportation. The project also implements the selected algorithms and applies transpiler pipeline optimization features from IBM’s Qiskit software. The implemented circuits and optimization techniques were evaluated by running application experiments on the Brisbane QPU provided by the IBM Quantum Platform.
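As a rough illustration of the before/after comparison described above, the following minimal Python sketch transpiles a small QFT circuit with Qiskit at two optimization levels and compares gate counts and depth; the ZX-calculus rewriting itself (e.g., via a library such as PyZX) and the ibm_brisbane backend are omitted, and a generic basis-gate set is assumed.

# Minimal sketch: compare a QFT circuit before and after Qiskit's
# transpiler optimization passes.
from qiskit import transpile
from qiskit.circuit.library import QFT

qc = QFT(4)             # 4-qubit Quantum Fourier Transform
qc = qc.decompose()     # expand the library block into elementary gates

# Transpile with and without aggressive optimization; no backend is given,
# so a generic basis-gate list stands in for the real device.
baseline = transpile(qc, basis_gates=["cx", "rz", "sx", "x"], optimization_level=0)
optimized = transpile(qc, basis_gates=["cx", "rz", "sx", "x"], optimization_level=3)

print("baseline :", baseline.count_ops(), "depth", baseline.depth())
print("optimized:", optimized.count_ops(), "depth", optimized.depth())
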
Student Major(s): Computer Science, Applied Mathematics
Advisor: Dr. Pieter Peers
We developed Lageta, an educational game for teaching combinational logic, to explore the challenges of designing and deploying a game for classrooms. We tested its effectiveness in a game-based assignment at increasing student interest in, and understanding of, concepts taught over a week of lectures, and found little improvement. The development process was also long and strenuous relative to the scope of the game that was eventually produced. However, the development process and design choices of Lageta can be compared to those of similar games to identify how its effectiveness can be improved and its production made more efficient.
Student Major(s): Computer Science
Advisor: Dr. Yifan Sun
AI has been increasingly used in mental health contexts, promising improved efficiency, accessibility, and care quality. However, little is known about how mental health counselors perceive, trust, and integrate AI into their everyday clinical practice. To address this gap, we conducted a mixed-methods study involving a survey of 88 licensed psychotherapists and 18 follow-up interviews. Our findings indicate that counselors commonly use AI for role-play training, pre-session assessments, and homework support, while institutional HIPAA guidelines, privacy concerns, and uncertain evidence bases limit direct clinical use. Participants expressed simultaneous trust in AI’s efficiency and distrust regarding its contextual understanding and ethical safeguards. We synthesize four key tensions that shape counselors’ adoption trajectories: augmentation vs. automation, personalization vs. privacy, efficiency vs. empathy, and enthusiasm vs. oversight. We discuss implications for policy and future HCI research focused on responsible AI integration in mental health care.
Student Major(s): Computer Science
Advisor: Dr. Janice Zhang
This project investigates how two fine-tuning approaches, full fine-tuning and QLoRA, affect the performance and efficiency of large language models (LLMs) for code generation. LLMs, such as GitHub Copilot and ChatGPT, are highly capable but computationally expensive, with significant energy costs. QLoRA, a parameter-efficient fine-tuning method, offers a potential way to reduce resource usage without severely impacting accuracy. Over eight weeks, we fine-tuned and evaluated state-of-the-art code models on standard benchmarks, comparing performance, resource requirements, and quality of generated code. Our evaluation included both functional correctness and non-functional properties such as robustness. We anticipate that QLoRA will achieve competitive results while using substantially fewer resources, highlighting its potential for sustainable AI development. This research contributes to a better understanding of trade-offs in LLM adaptation, informing practitioners and researchers on how to balance performance with efficiency in real-world software engineering applications.
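A minimal sketch of a QLoRA setup of the kind described above, using the Hugging Face transformers, peft, and bitsandbytes libraries; the base model name, target modules, and hyperparameters are illustrative assumptions, not the project’s actual configuration.

# Minimal QLoRA setup sketch (transformers + peft + bitsandbytes).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",            # placeholder code model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the LoRA adapters are trainable
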
Student Major(s)/Minor: Computer Science Major, Philosophy Minor
Advisor: Dr. Antonio Mastropaolo
How far can prompting alone go in turning vague user bug reports into high-quality ones? All software contains defects, or bugs, which users experience and report to application developers through online bug reporting forms. Unfortunately, these reports often contain ambiguous, incomplete, or incorrect information, which can slow down the process of fixing the bugs. GPT-BR is a system that relies on OpenAI’s Large Language Models (LLMs) to automatically enhance user bug reports. Through carefully designed prompts that instruct the LLM to strengthen user-provided information based on real application execution data, GPT-BR ensures that the bug reports it outputs include a clear description of the faulty application behavior (the bug), the expected application behavior the user believes should occur instead, and a set of detailed steps that can be performed to reproduce the bug. GPT-BR’s prompts were refined using real bug reports from public issue trackers such as GitHub. The system’s outputs were tested against actual application information, focusing on clarity and accuracy. Although final evaluations are ongoing, early results show that GPT-BR can produce more complete, high-quality bug reports, enabling developers to diagnose and resolve issues more efficiently.
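The following sketch illustrates the general prompting pattern described above, not GPT-BR’s actual prompts or model: it asks an OpenAI chat model to rewrite a vague user report into observed behavior, expected behavior, and steps to reproduce, constrained by supplied application context.

# Illustrative prompting sketch; model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def enhance_bug_report(user_report: str, app_context: str) -> str:
    prompt = (
        "Rewrite the following user bug report so that it clearly states:\n"
        "1. Observed (faulty) behavior\n"
        "2. Expected behavior\n"
        "3. Numbered steps to reproduce\n"
        "Use only facts consistent with the application context below.\n\n"
        f"Application context:\n{app_context}\n\nUser report:\n{user_report}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
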
Student Major(s)/Minor: Computer Science Major; Mathematics Minor
Advisor: Dr. Oscar Chaparro
This project builds on recent research into software traceability, which helps developers understand how different parts of a software system, such as requirements and source code, are connected. Past work introduced the idea of using information theory to explain why some automated tools recover these connections better than others. My research expands on that foundation by applying the same analysis to a recent, real-world dataset called Dronology, a software system for coordinating and controlling unmanned aerial systems. I use machine learning models that turn text into numerical representations to identify links between software artifacts, then apply measures such as entropy and information loss to explain the quality of those links. By experimenting with different ways of preparing the data and training the models, I explore what factors lead to better results. Early findings suggest that reducing information loss improves accuracy. These insights can help improve future traceability tools, making it easier to manage complex software systems and maintain critical connections between development artifacts.
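A minimal sketch of this kind of analysis, assuming a sentence-transformers embedding model and using the Shannon entropy of each requirement’s normalized similarity distribution as a simple proxy for ranking uncertainty; the model name and the exact information-loss measure used in the project are assumptions.

# Embed artifacts, rank candidate trace links, and score ranking uncertainty.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

requirements = ["The UAV shall return to base when battery is low."]
code_artifacts = ["class BatteryMonitor { ... }", "class RoutePlanner { ... }"]

req_vecs = model.encode(requirements, normalize_embeddings=True)
code_vecs = model.encode(code_artifacts, normalize_embeddings=True)
similarities = req_vecs @ code_vecs.T          # cosine similarities (rows = requirements)

def entropy(scores: np.ndarray) -> float:
    p = np.clip(scores, 1e-12, None)
    p = p / p.sum()                            # normalize to a probability distribution
    return float(-(p * np.log2(p)).sum())

for req, row in zip(requirements, similarities):
    print(req, "-> candidate ranking entropy:", round(entropy(row), 3))
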
Student Major(s)/Minor: Computer Science Major; Finance Minor
Advisor: Dr. Denys Poshyvanyk
Autonomous vehicles rely on accurate perception systems to detect and classify objects in complex driving environments. A central question in my research is whether combining LiDAR and camera data—known as sensor fusion—improves detection performance compared to using LiDAR alone. To investigate this, I reproduced a recent multimodal perception framework that integrates camera-based features with LiDAR point clouds. Using the publicly available nuScenes dataset, I designed experiments comparing a LiDAR-only baseline with the fusion model. Preliminary results suggest that fusion improves detection of large, well-defined objects such as cars and pedestrians, while performance on smaller or less visible objects, including bicycles and motorcycles, remains limited. These findings highlight both the promise and challenges of multimodal approaches in real-time perception. By clarifying when and how fusion adds value, this research contributes to ongoing efforts to develop safer, more reliable autonomous driving systems.
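A small sketch of the per-class comparison described above, assuming two result files in the layout produced by the nuScenes detection evaluation (metrics_summary.json); the file paths and key names are assumptions.

# Compare per-class average precision between a LiDAR-only baseline and a
# fusion model, given two hypothetical nuScenes evaluation summaries.
import json

with open("lidar_only/metrics_summary.json") as f:
    lidar = json.load(f)
with open("fusion/metrics_summary.json") as f:
    fusion = json.load(f)

print(f"mAP: {lidar['mean_ap']:.3f} (LiDAR) vs {fusion['mean_ap']:.3f} (fusion)")
for cls, lidar_ap in lidar["mean_dist_aps"].items():
    delta = fusion["mean_dist_aps"][cls] - lidar_ap
    print(f"{cls:>20s}: {lidar_ap:.3f} -> {fusion['mean_dist_aps'][cls]:.3f} ({delta:+.3f})")
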
Student Major(s): Computer Science
Advisor: Dr. Sidi Lu