DysphagiaScan is designed to screen for swallowing difficulties in a fast, noninvasive, and clinically interpretable way. The system combines hardware and AI to detect abnormal swallows from neck acoustics: swallows are recorded through a contact microphone, the audio is signal-processed to reduce noise, and the result is passed through a machine learning classifier to estimate the patient's risk of dysphagia.
Step 1: Capturing the swallow
The contact microphone is placed on the patient's neck to capture sounds during swallowing. The clinician starts the recording and feeds the patient foods of different consistencies to swallow. Once the patient finishes swallowing, the clinician stops the recording and clicks Analyze in the graphical user interface (GUI).
Step 2: Signal Processing
The signal is denoised using spectral gating. A background-only segment is first analyzed to estimate the average noise level in each frequency band. Frequency-specific thresholds are derived from these estimates and then applied to the swallow audio: spectral components below the threshold are suppressed, while those above are preserved. This selectively reduces background noise while keeping the key swallow signal intact.
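The spectral gating step above can be sketched as follows. This is a minimal illustration, not DysphagiaScan's actual implementation: the function name, the STFT window size, and the mean-plus-standard-deviation threshold rule are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, noise_clip, fs, n_std=1.5, nperseg=512):
    """Illustrative spectral gate: suppress STFT bins whose magnitude
    falls below a per-frequency threshold estimated from background noise.
    (n_std and nperseg are example values, not the product's settings.)"""
    # Estimate the noise profile from a background-only segment.
    _, _, noise_spec = stft(noise_clip, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(noise_spec)
    # Frequency-specific threshold: mean + n_std * std per frequency bin.
    thresh = noise_mag.mean(axis=1) + n_std * noise_mag.std(axis=1)
    # Transform the swallow audio and build a binary gate.
    _, _, spec = stft(audio, fs=fs, nperseg=nperseg)
    gate = np.abs(spec) >= thresh[:, None]  # keep bins above the threshold
    # Zero out sub-threshold bins and reconstruct the time-domain signal.
    _, cleaned = istft(spec * gate, fs=fs, nperseg=nperseg)
    return cleaned[: len(audio)]
```

In practice a soft gate (attenuating rather than zeroing sub-threshold bins) is often preferred to avoid musical-noise artifacts, but the hard gate keeps the idea clear.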
Step 3: Classifying with Machine Learning
After signal processing, the recording is analyzed by the machine learning classifier, which outputs a prediction for each swallow and, from these, an overall risk prediction for the patient.
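The per-swallow-to-patient aggregation can be sketched as below. The classifier itself is not described here, so it is passed in as a generic scoring function; the function name, the 0.5 decision threshold, and the majority-vote aggregation rule are assumptions for illustration, not DysphagiaScan's documented method.

```python
import numpy as np

def classify_patient(swallow_features, predict_proba, threshold=0.5):
    """Illustrative Step 3: score each swallow with a trained classifier,
    then aggregate the per-swallow decisions into one patient-level
    prediction (majority vote here; the real rule may differ)."""
    # Probability of dysphagia for each recorded swallow.
    probs = np.asarray([predict_proba(f) for f in swallow_features])
    per_swallow = probs >= threshold              # True = abnormal swallow
    patient_abnormal = per_swallow.mean() >= 0.5  # majority vote
    return per_swallow, bool(patient_abnormal)
```

Returning both the per-swallow flags and the overall result mirrors the GUI, which reports a prediction per swallow as well as the patient-level outcome.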
This page was made by Evan Chan