Undergraduate Honours Thesis Projects
This is a list of current and past Bachelor's thesis projects supervised or co-supervised by Prof. Mariana Shimabukuro.
"Hike: AI-Powered Assistant for University Course Planning and Program Requirements". Anupriya Dubey. In progress.
Navigating the complexities of course selection at universities can be a daunting task for students, often resulting in inefficiencies, frustration, and missed opportunities. At Ontario Tech University (OTU), students face challenges in accessing comprehensive course information, understanding degree requirements, and receiving personalized academic guidance.
To address these issues, this thesis proposes Hike: AI-Powered Assistant for University Course Planning, a GPT-driven application designed to simplify the course selection process. By leveraging OpenAI's GPT API, Hike offers an interactive, chat-based system that provides students with real-time, personalized recommendations, taking into account their academic history, preferences, and career aspirations. It integrates course catalogs, degree requirements, and prerequisites, streamlining decision-making while enhancing the student experience.
Additionally, Hike supports academic advisors by automating responses to common queries, allowing them to focus on more personalized advising, and assists faculty in understanding course demand trends and curriculum changes. By addressing the fragmented and inefficient systems currently in place, Hike aims to transform academic course planning into a seamless, intuitive, and data-driven experience, ultimately empowering students to take control of their educational journeys.
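As a rough illustration of the GPT-backed recommendation step described above, the sketch below sends a student's academic history and the relevant program requirements to OpenAI's chat API and asks for course suggestions. The model name, prompt wording, course codes, and data structures are assumptions for illustration, not the actual Hike implementation.

```python
# Minimal sketch of a GPT-backed course-recommendation call (illustrative only;
# not the actual Hike implementation).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical student context and program data of the kind Hike would pull
# from course catalogs and degree requirements.
student_profile = {
    "completed": ["CSCI 1030U", "CSCI 1060U", "MATH 1010U"],
    "interests": ["machine learning", "human-computer interaction"],
}
program_requirements = "Year 2 CS core: CSCI 2000U, CSCI 2010U, CSCI 2050U, MATH 2050U."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are an academic advising assistant. Recommend courses "
                    "that satisfy the stated requirements and match the student's "
                    "interests, and explain each recommendation briefly."},
        {"role": "user",
         "content": f"Requirements: {program_requirements}\n"
                    f"Completed: {student_profile['completed']}\n"
                    f"Interests: {student_profile['interests']}\n"
                    "Which courses should I take next term?"},
    ],
)
print(response.choices[0].message.content)
```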
"SwipeSense: Exploring the Feasibility of Back-of-Device Swipe Interaction Using Built-In IMU Sensors". Neel Shah. In progress.
The growing dimensions of smartphones have intensified the challenges associated with screen reachability. Back-of-device (BoD) interaction expands the range of reachability and offers a promising solution to mitigate screen occlusion while enhancing one-handed interactions. However, much of the existing research relies on incorporating additional hardware components. In this paper, we present SwipeSense, a technique for exploring the feasibility of directional swipe interactions on the back of devices, utilizing built-in inertial measurement unit (IMU) sensors and machine learning models. We conducted a user study with 12 participants who performed 9600 BoD swipes in 8 distinct directions while holding the device naturally. The results of our machine learning models indicate that various directional swipes on the back of the device can be accurately distinguished using only the built-in IMU sensors of the phone, achieving a range of model accuracy between 72% and 93%. Furthermore, we showcase potential applications for these gestures.
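The classification step described above could be prototyped along these lines: windowed accelerometer and gyroscope readings are summarized into simple statistical features and fed to an off-the-shelf classifier. The feature set, window size, and choice of random forest are assumptions for illustration, not necessarily the models evaluated in SwipeSense.

```python
# Sketch of classifying swipe direction from IMU windows (illustrative only;
# the features and model are assumptions, not the SwipeSense pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(window):
    """window: (n_samples, 6) array of accel x/y/z and gyro x/y/z readings."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Placeholder data standing in for recorded swipes: 800 windows of 50 samples,
# each labelled with one of 8 swipe directions.
rng = np.random.default_rng(0)
windows = rng.normal(size=(800, 50, 6))
labels = rng.integers(0, 8, size=800)

X = np.array([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25,
                                                    random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```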
"Piano Sight: AI-Powered App for Sight Reading". Keeran Srikugan. In progress.
Both newcomers and experienced pianists find sight reading challenging to learn. Piano Sight addresses this by combining augmented reality with large language models (LLMs). The system uses LLMs to analyze how well a user plays a sample piece presented in front of them in augmented reality, and it displays a list of errors in a format that is easy to understand and follow. Once the issues have been identified, users receive advice on how to improve in those areas, along with suggested practice activities.
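One way to produce the error list mentioned above is to align the notes the user actually played against the notes of the sample piece and flag mismatches; the resulting list could then be shown in AR or summarized by an LLM into practice advice. The note representation and comparison below are simplifying assumptions, not the Piano Sight analysis pipeline.

```python
# Illustrative sketch: comparing played notes to a reference piece to build an
# error list (a simplification; not the Piano Sight analysis pipeline).

# Hypothetical note sequences, e.g. extracted from MIDI input and from the score.
expected = ["C4", "E4", "G4", "C5", "B4", "G4"]
played   = ["C4", "E4", "G4", "B4", "B4", "F4"]

errors = []
for i, (want, got) in enumerate(zip(expected, played), start=1):
    if want != got:
        errors.append(f"Note {i}: expected {want}, played {got}")

if len(played) != len(expected):
    errors.append(f"Expected {len(expected)} notes but heard {len(played)}")

# This plain-language error list is what would be displayed to the user or
# handed to an LLM to generate tailored practice advice.
print("\n".join(errors) if errors else "No errors detected.")
```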
"KikuKaku: A Virtual Reality Application for Handwriting and Phonetic Association of Japanese Hiragana and Katakana Alphabets". Reese Dominguez. 2024.
Although virtual reality technologies have been explored for language learning, the majority of these applications focus on speaking. Writing, when included, is limited to memorization exercises such as flashcards and item/word matching, leaving little opportunity to practice producing characters without tracing. This is especially a problem for languages with syllabaries or non-Roman alphabets. This thesis presents KikuKaku, a virtual reality game focused on mapping phonetics to writing in Japanese. KikuKaku is a Unity application in which users interact with an item, listen to its pronunciation, and learn it by correctly writing the word from listening. KikuKaku offers an alternative approach to familiarizing learners with sounds in Japanese and, potentially, other languages with non-Roman writing systems.
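Checking whether a drawn character matches a reference stroke, as a KikuKaku-style system must do, can be approximated by resampling both strokes to the same number of points, normalizing them, and measuring the average distance between corresponding points. The sketch below is written in Python purely for illustration; KikuKaku itself is a Unity application and its actual recognition approach may differ.

```python
# Illustrative stroke-matching sketch (Python for brevity; KikuKaku is a Unity
# application, and its actual handwriting check may work differently).
import numpy as np

def resample(points, n=32):
    """Resample a stroke (list of (x, y) points) to n evenly spaced points."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0, dist[-1], n)
    return np.column_stack([np.interp(targets, dist, pts[:, i]) for i in range(2)])

def normalize(pts):
    """Translate to the centroid and scale to unit size."""
    pts = pts - pts.mean(axis=0)
    scale = np.abs(pts).max() or 1.0
    return pts / scale

def stroke_distance(drawn, template, n=32):
    a, b = normalize(resample(drawn, n)), normalize(resample(template, n))
    return np.linalg.norm(a - b, axis=1).mean()

# Hypothetical strokes: a roughly vertical line drawn by the user vs. the template.
drawn = [(0.1, 0.0), (0.12, 0.5), (0.09, 1.0)]
template = [(0.0, 0.0), (0.0, 1.0)]
print("match" if stroke_distance(drawn, template) < 0.2 else "try again")
```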
"MyConvoPal: A Mobile Application for Conversational English Pronunciation Training ". Bridget Green. 2024.
Conversational English is one of the many challenges faced by those hoping to improve their fluency in spoken English, and it represents a gap in existing pronunciation training and language learning services. This thesis presents MyConvoPal, a mobile pronunciation training application developed to provide an experience focused on conversational English, as well as other areas of English pronunciation that learners may find challenging. Within the app, users can choose from a list of pronunciation exercises based on what they would like to practice. The games offered can be divided into two types: word games, where individual words are practiced, and sentence games, a more advanced version in which users practice pronunciation in the context of an entire sentence. In each game, the user is given a randomly selected prompt based on the area of pronunciation they have chosen to focus on. The user then records their attempt at pronouncing the prompt and receives feedback based on how well they pronounced the word or sentence. The app tracks the user's progress across these exercises and shows how much they have improved in each area. It also offers customization, allowing users to tailor both the exercises and the app's appearance.
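The per-attempt feedback loop described above can be sketched as: transcribe the recording with a speech recognizer, then compare the transcription against the target prompt to produce a rough score and word-level notes. The similarity measure and feedback format below are illustrative assumptions, not MyConvoPal's actual scoring method.

```python
# Illustrative pronunciation-feedback sketch (the transcription is assumed to
# come from a speech recognizer; the scoring here is not MyConvoPal's method).
from difflib import SequenceMatcher

def score_attempt(prompt: str, transcription: str):
    """Return an overall similarity score and per-word feedback."""
    overall = SequenceMatcher(None, prompt.lower(), transcription.lower()).ratio()
    feedback = []
    said = transcription.lower().split()
    for i, word in enumerate(prompt.lower().split()):
        heard = said[i] if i < len(said) else ""
        if word != heard:
            feedback.append(f"'{word}' sounded like '{heard or '(missing)'}'")
    return overall, feedback

# Hypothetical prompt and recognizer output for one sentence-game round.
prompt = "Could you pass the salt please"
transcription = "Could you pass the sort please"
score, notes = score_attempt(prompt, transcription)
print(f"score: {score:.0%}")
print("\n".join(notes))
```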
"LingoComics: Situated Language Learning through Comic Style AI-generated Stories". Devel Panchal. 2024. Undergraduate Thesis Award 2024 Nominee.
There are many language learning tools available, and most of them succeed at their own purposes, as each offers a different learning approach. However, most of these apps focus primarily on memorization and vocabulary learning, and they lack the contextual aspect that distinguishes truly learning a language from memorizing its vocabulary. This thesis presents LingoComics, a situated language learning web application that uses comic-style AI-generated stories to provide a new way to learn languages. This approach leverages emotional engagement with the content to enhance retention and practical language skills. LingoComics is built as a web application with SvelteKit, Firebase, and OpenAI's API, making it accessible to a wide audience. By offering an innovative method of contextual learning, LingoComics aims to bridge the gap between standard vocabulary acquisition and practical language usage, potentially transforming how languages are learnt.
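A story-then-image pipeline of the kind implied above could look roughly like this: one API call drafts a short situated comic script, and a second call renders a comic-style panel for it. LingoComics itself is a SvelteKit web application; the Python below and the specific prompts and model names in it are illustrative assumptions only.

```python
# Rough sketch of a story + comic-panel generation pipeline (Python for
# illustration; LingoComics is a SvelteKit app, and the prompts and model
# choices here are assumptions).
from openai import OpenAI

client = OpenAI()

scenario = "ordering coffee at a cafe in Rome, beginner Italian"

# 1) Draft a short situated comic script for the chosen scenario.
story = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Write a four-panel comic script for a "
                                      "language learner: one or two short lines "
                                      "of dialogue per panel, with translations."},
        {"role": "user", "content": f"Scenario: {scenario}"},
    ],
).choices[0].message.content

# 2) Render one comic-style panel illustrating the scenario.
panel = client.images.generate(
    model="dall-e-3",
    prompt=f"Comic-style illustration, panel 1: {scenario}",
    size="1024x1024",
)
print(story)
print("panel image:", panel.data[0].url)
```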
"ASLearner: An American Sign Language Spaced-Repetition Learning App". Aron-Seth Cohen. 2023.
This thesis presents ASLearner, an app designed to address a gap in the market for accessible ASL learning tools, teaching ASL through a minimalist, flashcard-like lesson system. Utilizing evidence-based and adaptive learning techniques, the app uses spaced repetition to enhance ASL vocabulary retention. ASLearner was developed primarily using Flutter and Firebase, with multi-platform compatibility and the potential to accommodate other sign languages in the future. By providing an alternative approach to learning ASL, ASLearner enriches the landscape of language learning applications.
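The spaced-repetition scheduling mentioned above is commonly implemented with something like the SM-2 algorithm: each review updates a card's ease factor and interval based on how well the learner recalled the sign. ASLearner is built with Flutter; the Python sketch below illustrates the general scheduling idea and is not necessarily the scheme ASLearner uses.

```python
# SM-2-style spaced-repetition scheduling sketch (illustrative; ASLearner is a
# Flutter app and may use a different scheme).
from dataclasses import dataclass

@dataclass
class Card:
    sign: str              # e.g. the ASL sign being practised
    ease: float = 2.5      # ease factor
    interval: int = 1      # days until the next review
    repetitions: int = 0   # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update a card after a review. quality: 0 (forgot) .. 5 (perfect recall)."""
    if quality < 3:                      # failed recall: start over
        card.repetitions, card.interval = 0, 1
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1
        elif card.repetitions == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card

card = Card(sign="THANK-YOU")
for q in (5, 4, 5):                      # three successful reviews
    card = review(card, q)
    print(card.sign, "next review in", card.interval, "days")
```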
"Card-IT Versus: A Competitive Multiplayer Game for Testing Italian Verb Morphology". Shawn Yama. 2022.
This thesis presents a gamified extension to Card-IT called Card-IT Versus, which aims to engage multiple students in learning together by competing in verb conjugation games. The games are based on the “Conjugate” and “Identify Tense” quiz types from the base Card-IT system. In the gamified version, players must correctly answer a series of flashcard questions as quickly as possible to maximize their game scores. These flashcard questions are drawn from the deck(s) that a player has organized in the base Card-IT system. During the game, players can obtain any of 14 items that either assist them or sabotage an opponent's game. Items are a key element in Card-IT Versus, as they are designed to make the experience frantic and to help players who are not performing as well. Once all players have gone through all of the questions, the player with the most points wins the match. To implement this multiplayer functionality, new React.js components and a Socket.IO server were added to Card-IT.
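The speed-based scoring described above (answer correctly, answer quickly, with items modifying the outcome) can be sketched as a simple scoring function. The point values, time bonus, and item multipliers below are illustrative assumptions, not Card-IT Versus's actual rules.

```python
# Illustrative scoring sketch for a Card-IT Versus-style round (the point
# values and item effects are assumptions, not the game's actual rules).

BASE_POINTS = 100
MAX_TIME_BONUS = 50          # fastest possible answers earn up to this bonus
TIME_LIMIT = 10.0            # seconds allowed per flashcard

def question_score(correct: bool, seconds_taken: float, multiplier: float = 1.0) -> int:
    """Score one flashcard answer; faster correct answers earn more points."""
    if not correct:
        return 0
    remaining = max(0.0, TIME_LIMIT - seconds_taken)
    bonus = MAX_TIME_BONUS * (remaining / TIME_LIMIT)
    return round((BASE_POINTS + bonus) * multiplier)

# One player's round: (correct?, time taken, item multiplier in effect).
answers = [(True, 2.3, 1.0), (True, 7.8, 1.5), (False, 4.0, 1.0), (True, 1.1, 0.5)]
total = sum(question_score(c, t, m) for c, t, m in answers)
print("round total:", total)
```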