Student Demo Projects
Below is a selection of demo projects completed by my bachelor's and master's students at TU Graz. Most projects were developed as part of the Mobile Computing Lab, proposed and realized by the students themselves. More projects and updates are on the way ... stay tuned!
Master Wang Game: Human Activity Recognition and Interaction
Idea and app implementation by Dong Wang, 2024
Master Wang Game is an innovative human activity recognition and interaction game that leverages mobile computing and on-device machine learning (Apple's Core ML) to enhance interactive experiences. Designed for smartphones, the app uses the phone's accelerometer and gyroscope as input to on-device machine learning models for real-time activity classification. The system features a horizontally scrolling pixel-style game that showcases the fun and innovation of this interactive approach. It works in two parts: the mobile app collects motion sensor data, classifies it in real time using the machine learning models, and sends the results to a game terminal over a socket connection; the game terminal then updates the game scene and characters based on the received actions. Master Wang Game exemplifies the integration of human-computer interaction with mobile technology, providing a unique and engaging gaming experience.
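To make the two-part architecture concrete, here is a minimal sketch of the app-to-terminal link. The JSON message format, the port, and the action names are illustrative assumptions, not the project's actual protocol:

```python
# Minimal sketch of forwarding classified actions to the game terminal.
# The address, port, and message format are hypothetical.
import json
import socket

GAME_TERMINAL = ("192.168.0.10", 5005)  # hypothetical game terminal address

def send_action(action: str, confidence: float) -> None:
    """Forward one classified activity (e.g. 'jump') to the game terminal."""
    msg = json.dumps({"action": action, "confidence": confidence}).encode()
    with socket.create_connection(GAME_TERMINAL, timeout=1.0) as sock:
        sock.sendall(msg + b"\n")  # newline-delimited messages

# On the phone, the Core ML classifier output would be forwarded as, e.g.:
# send_action("jump", 0.93)
```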
Magic: The Gathering Card Scanner
Idea and app implementation by Stefan Moser and Bernhard Vergeiner, 2024
Magic: The Gathering (MTG) is a strategic collectible card game where players build decks from a vast array of cards, aiming to reduce their opponent’s life total to zero or achieve alternative victory conditions. Set in a rich fantasy multiverse, the game offers various formats and boasts a strong community, both physically and digitally. By 2024, MTG features over 27,000 unique cards, each with its own specific rulings, which can sometimes be unclear to players. To minimize disputes and ensure smooth gameplay, we developed "MTG Card Scanner," an app that identifies cards using their unique artwork. The app uses a digital fingerprinting technique, perceptual hashing. Once a card is recognized, the app displays its extended ruling text, current market price, and a link to its complete description. Users can also add cards to a local collection within the app, allowing them to track the market value of their cards. MTG Card Scanner streamlines the gaming experience by providing quick access to essential card information, enhancing both gameplay and card management for Magic: The Gathering enthusiasts.
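The matching step can be sketched in a few lines of Python. The snippet below uses the third-party imagehash package as a stand-in; the app's exact hash variant, index structure, and distance threshold may differ:

```python
# Illustrative sketch of card-artwork matching via perceptual hashing.
from PIL import Image
import imagehash

def build_index(card_paths):
    """Precompute a perceptual hash for every known card artwork."""
    return {path: imagehash.phash(Image.open(path)) for path in card_paths}

def identify(scan_path, index, max_distance=10):
    """Return the closest card by Hamming distance, or None if no good match."""
    query = imagehash.phash(Image.open(scan_path))
    best_path, best_hash = min(index.items(), key=lambda kv: query - kv[1])
    return best_path if (query - best_hash) <= max_distance else None
```

The appeal of perceptual hashing is that small changes in lighting, scale, or compression flip only a few bits of the hash, so a scanned card still lands within a small Hamming distance of its reference artwork.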
TopOut App: AI-Powered Climbing Recording
Idea and app implementation by Filippo Orru, 2024
TopOut is an Android app that automates video recording for bouldering using two machine learning models, freeing climbers from manually managing recordings and letting them focus entirely on their performance. TopOut structures the capture process around "route visits," automatically detecting and recording multiple attempts on a single route and storing them locally. The app also provides post-recording editing tools for fine-tuning recorded attempts. Designed for practical use, TopOut ensures climbers never miss valuable feedback due to recording errors. Initial tests show promising results across various climbing scenarios, making TopOut a pioneering application of artificial intelligence in bouldering training and performance analysis. By automating video capture, TopOut significantly improves the bouldering experience, enabling detailed analysis without the distraction of managing recording devices.
Citizen Weather Forecast
Idea and app implementation by Nikolaus Ostovary, 2024
Weather forecasting significantly impacts our daily lives, influencing decisions ranging from travel plans to agricultural practices. Traditional meteorological models rely on complex physical equations and numerical simulations, which can be computationally intensive and resource-demanding. The goal of this app, "Citizen Weather Forecast," is to enable weather predictions using environmental sensors on the user's smartphone, such as temperature, humidity, and pressure. The app filters the data for usable values and increases the resolution of the weather prediction grid. Since most smartphones lack temperature and humidity sensors and only some models have a barometer, only pressure sensor data is used in practice. By default, all inputs are prefetched from a weather API; where the phone's own sensors are available, their readings substitute the corresponding API values. For the prediction itself, the app uses a Long Short-Term Memory (LSTM) neural network to produce next-hour forecasts.
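As a rough illustration of the prediction step, a Keras model of the kind described could look as follows; the window length and network size are assumptions, not the app's actual configuration:

```python
# Sketch: next-hour pressure prediction from a window of hourly readings.
import numpy as np
import tensorflow as tf

WINDOW = 24  # assumed: predict from the last 24 hourly pressure samples

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted pressure one hour ahead
])
model.compile(optimizer="adam", loss="mse")

def make_windows(pressure):
    """pressure: 1-D array of hourly readings (API values, with the phone's
    barometer substituted where available)."""
    x = np.stack([pressure[i:i + WINDOW] for i in range(len(pressure) - WINDOW)])
    return x[..., None], pressure[WINDOW:]
```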
COFFECTLY
Idea and app implementation by Theo Gasteiger, 2023
Shot timing is one of the most important variables in the extraction of a great espresso. Extracting an espresso too fast leads to a light body, high acidity, and an unpleasant taste. Conversely, an overly long extraction leads to a heavy body and more bitter tasting notes. The Istituto Nazionale Espresso Italiano therefore recommends an extraction time of 25 ± 5 seconds. However, timing each extraction is cumbersome and time-consuming. COFFECTLY is a novel iOS application that enables automatic shot timing as well as coffee brand classification based on acceleration measurements. COFFECTLY exploits the three-axis accelerometer in iOS devices not only to detect the state of the espresso machine (pump on/off) but also to extract information about the coffee used in the current extraction. With COFFECTLY, we are the first to provide automatic shot timing as well as coffee brand detection based solely on accelerometer measurements.
Classification: Water
Classification: Hardy
Classification: Martella
Pump state detection
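To give a feeling for the shot-timing half of the idea: while the pump runs, the machine vibrates, which raises the short-window variance of the accelerometer signal. The sketch below captures that intuition with a simple threshold rule; the window length and threshold are illustrative, and the app itself builds on trained models rather than this hand-tuned rule:

```python
# Sketch: pump on/off detection from accelerometer variance.
import numpy as np

def pump_is_on(accel_window, threshold=0.02):
    """accel_window: (N, 3) array of x/y/z acceleration samples."""
    magnitude = np.linalg.norm(accel_window, axis=1)
    return magnitude.var() > threshold  # pump vibration raises the variance

def shot_time_seconds(windows, window_seconds=0.5):
    """Total pump-on time over a sequence of consecutive windows."""
    return sum(window_seconds for w in windows if pump_is_on(w))
```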
Emission Tracking App
Idea and app implementation by Adrian Opheim, 2023
EmTrack is a smartphone application that enables day-to-day tracking and recording of CO2 emissions, estimated from the user's activity and its duration. The project is limited to estimating the emissions caused by the user's mode of transportation. The app detects whether the user is walking, biking, driving a car, taking a bus, etc., and, given the time spent in each transport mode, estimates how much CO2 is emitted into the atmosphere. The transportation mode detection is accomplished by recording sensor data on an Android phone and then classifying user activity using a pre-trained LSTM model locally on the phone. The app provides information about the current user activity and statistics of past activities over the whole observation period. The app is lightweight and does not use the power-hungry GPS.
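The emission estimate itself is simple bookkeeping: time spent in each detected mode times a per-minute emission factor. The factors below are rough illustrative values, not the app's calibrated numbers:

```python
# Sketch: CO2 estimate from time spent per transport mode.
# Factors are illustrative: g CO2 per minute, derived from assumed average
# speeds and per-kilometre emissions (e.g. car: ~40 km/h at ~240 g/km).
EMISSION_G_PER_MIN = {
    "walking": 0.0,
    "biking": 0.0,
    "bus": 50.0,
    "car": 160.0,
}

def daily_emissions_g(minutes_per_mode):
    """minutes_per_mode: e.g. {'walking': 30, 'car': 20} -> grams of CO2."""
    return sum(EMISSION_G_PER_MIN[m] * t for m, t in minutes_per_mode.items())
```

For example, 30 minutes of walking plus 20 minutes of driving would come out at 0 + 20 × 160 = 3200 g of CO2 under these assumed factors.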
Deep Inference on STM32 Discovery Kit
By Peter Prodinger, 2022
In this demonstration, the student used the STM32F7 Discovery Kit, equipped with 340 kB of RAM, to investigate how various optimization techniques influence the performance of a machine learning model. This involved experimenting with model optimization, quantization, and compiler flags. Both convolutional and fully connected models were examined. Detailed observations were made regarding the model's accuracy in predicting the first and second most probable classes, as well as the instances where the model made errors. The board's large display was used to show, side by side, the input image (a sample from the MNIST test set) and the model's performance statistics, providing valuable insight into the model's behavior.
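For readers unfamiliar with the quantization side of such experiments, the sketch below shows post-training full-integer quantization with TensorFlow Lite; deploying to the STM32 additionally involves a converter such as STM32Cube.AI, which is outside this snippet, and the student's actual toolchain may have differed:

```python
# Sketch: post-training int8 quantization with TensorFlow Lite.
import tensorflow as tf

def quantize(saved_model_dir, representative_data):
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Full-integer quantization needs sample inputs to calibrate value ranges.
    converter.representative_dataset = lambda: (
        [x[None, ...].astype("float32")] for x in representative_data
    )
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()  # bytes of the quantized .tflite model
```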
Magic Wand Demo on Arduino Nano 33 Sense
By Rahim Entezari and David Mihola, 2021-2022
This demo was developed as a component of the Embedded Machine Learning course. It showcases a deep learning model deployed on an Arduino Nano 33 Sense, a device with limited resources, specifically 256 kB of RAM. The model's purpose is to accurately identify spells performed by a wizard, using a set of wand motions outlined in the "Beginner's guide to wand motions". The Arduino platform is equipped with an IMU sensor, and the sensor's readings serve as input to the spell recognition model. It is important to note that the proficiency of spell execution may vary among magicians. Designing and implementing a robust magic wand while taking resource constraints into account is an art.
by Rahim, 2021
by David, 2022
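To illustrate what fits into such a memory budget, here is a sketch of a spell classifier sized for a 256 kB microcontroller: a small convolutional network over fixed-length IMU windows. The window length, channel count, and number of spells are assumptions, not the students' actual architecture:

```python
# Sketch: a tiny gesture classifier over IMU windows, small enough to be
# int8-quantized and exported with TensorFlow Lite Micro for the Arduino.
import tensorflow as tf

N_SAMPLES, N_CHANNELS, N_SPELLS = 128, 6, 4  # assumed: accel + gyro, 4 spells

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_SAMPLES, N_CHANNELS)),
    tf.keras.layers.Conv1D(8, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_SPELLS, activation="softmax"),
])
```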
BlindApp: Free path for visually impaired people
Idea and app implementation by Philip Samer, 2022
The app is designed for visually impaired people, to help them move around without bumping into objects. The app uses the main camera of the smartphone to estimate the distance to objects and notify the user if something is in front of them. The phone is placed in the front pocket of a shirt or carried in a hand (which makes it easier to explore different directions). Headphones can be used to communicate with the user, giving an audio signal when an obstacle is detected. To estimate the distance to the surrounding objects, the app relies on a recently published method by W. Yin et al., "Learning to Recover 3D Scene Shape from a Single Image", to recover accurate 3D scene shape.
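Once the depth network has produced a per-pixel distance map, the alerting step is straightforward. The sketch below assumes a metric depth map is already available; the region of interest and distance threshold are illustrative:

```python
# Sketch: obstacle alert from an estimated depth map.
import numpy as np

def obstacle_ahead(depth_map, min_distance_m=1.5):
    """depth_map: (H, W) array of estimated distances in metres."""
    h, w = depth_map.shape
    # Look only at the central region, i.e. straight ahead of the user.
    center = depth_map[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    return float(np.percentile(center, 5)) < min_distance_m  # robust minimum

# if obstacle_ahead(depth): play a warning tone over the headphones
```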
PlantApp: Garden plant identification and diagnostics
Idea and app implementation by Alexander Palmisano, 2022
Many people have plants in their homes or gardens. Sometimes it can be challenging to identify a plant and determine whether it is healthy or diseased. PlantApp, developed by Alexander Palmisano, can identify 14 of the most common plants (fruit and vegetable plants) that people typically have in their homes and gardens and diagnose whether a plant has a disease. The model is trained on Kaggle's New Plant Diseases Dataset. A deep learning model classifies images of leaves and instantly reports the result; the model has been quantized so it can run on a mobile phone. PlantApp also lets the user quickly scan their plants and upload the data to a cloud service in order to retrain and improve the model.
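On the phone, the quantized model runs through the TensorFlow Lite interpreter. A sketch of the inference call is below; the model file name and the 256x256 input size are assumptions:

```python
# Sketch: running the quantized leaf classifier with the TFLite interpreter.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="plant_model.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify_leaf(image):
    """image: (256, 256, 3) array, preprocessed to the model's input range."""
    interpreter.set_tensor(inp["index"], image[None, ...].astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores)), scores  # class id + per-class scores
```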
SpeechMood: Track your expressiveness
Idea and app implementation by Maximilian Nothnagel, 2022
SpeechMood is a mobile app for Android that lets a user record and instantly process their voice, for instance during a speech or presentation, and receive a prediction of the conveyed mood category based on the audio signal. The motivation is the prospect of detecting a person's emotional state from objectively measurable signals, such as the frequency content of speech or the pulse rate. SpeechMood relies on a deep network pre-trained on the RAVDESS and Berlin EmoDB datasets. The model works well on test data, but several tricks had to be used to also obtain good performance on live data, as can be observed in the video.
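A typical front end for such speech-emotion models summarizes each recording as a fixed-size feature vector before classification. The sketch below uses MFCC features via librosa as one plausible choice; the app's actual feature set and network are not reproduced here:

```python
# Sketch: fixed-size MFCC summary of a recording for mood classification.
import librosa
import numpy as np

def mood_features(wav_path, sr=22050, n_mfcc=40):
    audio, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)  # (40, frames)
    # Mean and std over time give an (80,) vector for the classifier.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```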
Activity Recognition with Transfer Learning
by Julian Rudolf, Christopher Hinterer, Jakob Soukup, Peter Prodinger, and Dietmar Malli, 2020-2022
In this project, the students developed mobile apps to track users' activities using various methods. In the first step, a k-nearest-neighbour (KNN) classifier was trained to solve the problem. Then, the same task was approached by training a deep neural network (DNN), in some realizations pretrained on the well-known WISDM data set. Finally, we used on-device transfer learning (TL) to adapt the model to a new phone location and orientation by re-training it with very little data. In the videos below, MCL students demonstrate the performance of their apps. Nice!
by Julian and Christopher, 2022
by Peter, 2020
by Jakob, 2022
by Dietmar, 2020
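The transfer-learning step described above can be sketched as follows: keep the pretrained feature extractor frozen and retrain only the last layer on a handful of samples from the new phone position. The layer layout is illustrative, not the students' exact models:

```python
# Sketch: on-device-style transfer learning for activity recognition.
import tensorflow as tf

def adapt(pretrained, x_new, y_new, n_classes):
    """pretrained: Keras model trained on WISDM-style accelerometer windows."""
    base = tf.keras.Model(pretrained.input, pretrained.layers[-2].output)
    base.trainable = False  # freeze the feature extractor
    head = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    head.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    head.fit(x_new, y_new, epochs=20, batch_size=8)  # very little new data
    return head
```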
Quantle: Filler Word Detection and Counting
by Rahim Entezari and Franz Papst, 2020
The ability to speak in public is a skill of growing importance. Disfluencies are quite common in everyday speech: they give a speaker time to think about what to say next and help to structure spoken language. Disfluencies usually manifest themselves as utterances of filler words like “uhm”, “well” or “like”. While moderate usage of such fillers helps a speaker sound natural, excessive use or long pauses indicate that the speaker lacks confidence. In this project, Rahim Entezari and Franz Papst detect and count vocal fillers on a mobile phone with the help of a deep neural network. They plan to add the model to Quantle to give speakers a tool to improve their public speaking skills when rehearsing a presentation!
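On top of a per-window filler classifier (the deep network), the counting itself needs a little care so that one long "uhm" is not counted several times. A sketch of that logic, with the network abstracted as a callable:

```python
# Sketch: count fillers once per contiguous run of positive windows.
def count_fillers(windows, is_filler):
    """windows: iterable of audio frames; is_filler: classifier -> bool."""
    count, in_filler = 0, False
    for w in windows:
        detected = is_filler(w)
        if detected and not in_filler:
            count += 1  # rising edge: a new "uhm"/"well"/"like" starts here
        in_filler = detected
    return count
```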
Wireless Drum Sticks
Idea and app implementation by Markus-Philipp Gherman and Fabian Moik, 2019
The project implements wireless drum sticks without surface reference on conventional iPhones. The app works as follows: stick hits are detected, without any surface reference, from the measurements of the accelerometer integrated into the phone by a trained long short-term memory (LSTM) network running locally on the device. The location of the hit is determined by the same LSTM and attributed to one of the six drums on the screen. The LSTM network was trained on a manually gathered data set comprising up to 700 hits per drum location, performed by multiple drum players. Detection is almost 100% accurate (ha! professional drummers get better accuracy than beginners!). The initial location of the phone is used to calibrate to the middle of a virtual screen. Once a hit and its strength are detected, the corresponding audio is played by the phone. One can even use two phones simultaneously! Watch the awesome videos below, played and explained by the students. Professional drummers found the app impressively responsive and usable!
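As a rough sketch of the per-window inference loop: the network emits one of seven classes (six drum positions plus "no hit"), and on a hit the strength can be approximated from the peak acceleration. The class layout and strength heuristic below are illustrative, not the students' implementation:

```python
# Sketch: mapping LSTM output to drum hits and strengths.
import numpy as np

DRUMS = ["snare", "hi-hat", "tom1", "tom2", "crash", "ride"]  # assumed layout

def process_window(model, accel_window):
    """accel_window: (N, 3) accelerometer samples from the phone."""
    probs = model.predict(accel_window[None, ...], verbose=0)[0]
    cls = int(np.argmax(probs))
    if cls == len(DRUMS):            # last class: no hit in this window
        return None
    strength = float(np.abs(accel_window).max())  # crude hit-strength proxy
    return DRUMS[cls], strength      # -> trigger the matching drum sample
```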
Indoor Localization using Particle Filter
2019
Five teams in SS 2019 implemented indoor localization using a particle filter algorithm and tested it at ITI in Inffeldgasse 16. It's impressive how many optimizations the students suggested to improve their apps' performance, from optimized computation of wall positions (to speed things up) to various ways of resampling particles. Thank you guys for your efforts and enthusiasm!
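For reference, one update step of such a particle filter might look like the sketch below: move every particle by the estimated step, kill particles that would cross a wall, and resample in proportion to the surviving weights. The map representation and motion model are simplified assumptions:

```python
# Sketch: one particle-filter update for map-constrained indoor localization.
import numpy as np

def pf_step(particles, weights, step_vec, crosses_wall, noise=0.1):
    """particles: (N, 2) positions; step_vec: (2,) motion from dead reckoning."""
    n = len(particles)
    moved = particles + step_vec + np.random.normal(0.0, noise, size=(n, 2))
    weights = weights.copy()
    for i in range(n):  # kill particles whose move passes through a wall
        if crosses_wall(particles[i], moved[i]):
            weights[i] = 0.0
    if weights.sum() == 0.0:
        weights[:] = 1.0  # every particle died: fall back to uniform weights
    weights /= weights.sum()
    idx = np.random.choice(n, size=n, p=weights)  # multinomial resampling
    return moved[idx], np.full(n, 1.0 / n)
```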