Abdelkareem Bedri

About

I'm a PhD student in the Human-Computer Interaction Institute at Carnegie Mellon University, and a Qualcomm Fellow and a KAAYIA Fellow. I'm advised by Professor Mayank Goel. I received a master's degree in Computer Science from the Georgia Institute of Technology, where I was advised by Professor Thad Starner and Professor Gregory Abowd. My research interests are in activity recognition, wearable computing, mobile health, and human-computer interaction.

bedri@cmu.edu, Smart Sensing for Humans Lab (SMASH-LAB), Wean 4120.

Selected Publications

The full publication list can be found on my Scholar page.

EarBit

Chronic and widespread diseases such as obesity, diabetes, and hypercholesterolemia require patients to monitor their food intake, and food journaling is currently the most common method for doing so. However, food journaling is subject to self-bias and recall errors, and is poorly adhered to by patients. In this paper, we propose an alternative by introducing EarBit, a wearable system that detects eating moments. We evaluate the performance of inertial, optical, and acoustic sensing modalities and focus on inertial sensing, by virtue of its recognition and usability performance. Using data collected in a simulated home setting with minimal restrictions on participants' behavior, we build our models and evaluate them with an unconstrained outside-the-lab study. For both studies, we obtained video footage as ground truth for participants' activities. Using leave-one-user-out validation, EarBit recognized all the eating episodes in the semi-controlled lab study and achieved an accuracy of 90.1% and an F1-score of 90.9% in detecting chewing instances. In the unconstrained, outside-the-lab evaluation, EarBit obtained an accuracy of 93% and an F1-score of 80.1% in detecting chewing instances. It also accurately recognized all but one of the recorded eating episodes, which ranged from a 2-minute snack to a 30-minute meal.
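As a rough illustration of the leave-one-user-out protocol mentioned above, the sketch below trains and tests a chewing classifier with each participant held out in turn. The feature matrix, labels, participant groups, and the random-forest classifier are illustrative assumptions, not the published EarBit pipeline.

```python
# Minimal sketch of leave-one-user-out evaluation for chewing detection.
# X, y, and groups are hypothetical placeholders: windowed IMU features,
# binary chewing labels, and per-window participant ids, respectively.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score, f1_score

def evaluate_leave_one_user_out(X, y, groups):
    """Train on all participants but one, test on the held-out participant."""
    logo = LeaveOneGroupOut()
    y_true, y_pred = [], []
    for train_idx, test_idx in logo.split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        y_true.extend(y[test_idx])
        y_pred.extend(clf.predict(X[test_idx]))
    return accuracy_score(y_true, y_pred), f1_score(y_true, y_pred)
```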

SkinWire

Current wearable form factors often house electronics in an enclosure attached to the body. This form factor, while wearable, tends to protrude from the body and can therefore limit wearability. While emerging research on on-skin interfaces from the HCI and wearable communities has generated form factors with lower profiles, these interfaces often still require support from conventional electronics, and their associated form factors, for the microprocessor, wireless communication, and battery units. In this work, we introduce SkinWire, a fabrication approach that extends early work on on-skin interfaces to shift wearable devices from their traditional box-like forms to a fully self-contained on-skin form factor.

Detecting Mastication

This paper presents an approach for automatically detecting eating activities by measuring deformations in the ear canal walls caused by mastication. These deformations are measured with three infrared proximity sensors encapsulated in an off-the-shelf earpiece. To evaluate our method, we conducted a user study in a lab setting where 20 participants were asked to perform eating and non-eating activities. A user-dependent analysis demonstrated that eating could be detected with 95.3% accuracy. This result indicates that proximity sensing offers an alternative to acoustic and inertial sensing for eating detection, while providing benefits in terms of privacy and robustness to noise.
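A minimal sketch of how per-window statistics over the three proximity channels might be computed before feeding a user-dependent classifier; the sampling rate, window length, and feature choices here are assumptions for illustration only, not the paper's exact pipeline.

```python
# Illustrative per-window features over three infrared proximity channels.
import numpy as np

def window_features(prox, fs=100, win_s=3.0):
    """prox: (n_samples, 3) proximity readings from the earpiece sensors.
    Returns per-window mean, std, and peak-to-peak range for each channel."""
    win = int(fs * win_s)
    feats = []
    for start in range(0, len(prox) - win + 1, win):
        w = prox[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0),
                                     w.std(axis=0),
                                     np.ptp(w, axis=0)]))
    return np.asarray(feats)
```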

TapSkin

The touchscreen has been the dominant input surface for smartphones and smartwatches. However, its small size compared to a phone limits the richness of the input gestures that can be supported. We present TapSkin, an interaction technique that recognizes up to 11 distinct tap gestures on the skin around the watch using only the inertial sensors and microphone on a commodity smartwatch. An evaluation with 12 participants shows our system can provide classification accuracies from 90.69% to 97.32% in three gesture families--number pad, d-pad, and corner taps. We discuss the opportunities and remaining challenges for widespread use of this technique to increase input richness on a smartwatch without requiring further on-body instrumentation.
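The sensor-fusion idea can be sketched as concatenating inertial and microphone features around each detected tap and classifying the result into one of the 11 gestures. The time-domain features and the SVM below are assumptions for illustration, not the published TapSkin implementation.

```python
# Rough sketch: fuse IMU and microphone features per tap, then classify.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def tap_features(imu_window, audio_window):
    """imu_window: (n, 6) accel+gyro samples around the tap;
    audio_window: (m,) microphone samples. Simple time-domain statistics."""
    imu_feats = np.concatenate([imu_window.mean(axis=0),
                                imu_window.std(axis=0),
                                np.abs(imu_window).max(axis=0)])
    audio_feats = np.array([audio_window.std(),
                            np.abs(audio_window).max(),
                            np.mean(audio_window ** 2)])
    return np.concatenate([imu_feats, audio_feats])

def train_tap_classifier(taps, labels):
    """taps: list of (imu_window, audio_window); labels: gesture ids 0..10."""
    X = np.array([tap_features(i, a) for i, a in taps])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```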

Silent Speech

We address the problem of performing silent speech recognition where vocalized audio is not available (e.g., due to a user's medical condition) or is highly noisy (e.g., during firefighting or combat). We describe our wearable system for capturing tongue and jaw movements during silent speech. The system has two components: the Tongue Magnet Interface (TMI), which uses the 3-axis magnetometer aboard Google Glass to measure the movement of a small magnet glued to the user's tongue, and the Outer Ear Interface (OEI), which measures the deformation in the ear canal caused by jaw movements using proximity sensors embedded in a set of earmolds. We collected a dataset of 1,901 utterances of 11 distinct phrases silently mouthed by six able-bodied participants. Recognition relies on hidden Markov model-based techniques to select one of the 11 phrases. We present encouraging results for user-dependent recognition.
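One common way to realize HMM-based phrase selection is to train one model per phrase and pick the phrase whose model scores the utterance highest. The sketch below uses hmmlearn with Gaussian emissions as an assumed stand-in; the paper's actual HMM topology and features are not reproduced here.

```python
# Sketch: one HMM per phrase, recognition by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_phrase_models(training_data, n_states=5):
    """training_data: dict phrase -> list of (T_i, n_features) sequences
    of TMI magnetometer + OEI proximity features (hypothetical shapes)."""
    models = {}
    for phrase, seqs in training_data.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[phrase] = m
    return models

def recognize(models, utterance):
    """Return the phrase whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda p: models[p].score(utterance))
```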

Sign Language Recognition

In Automatic Sign Language Recognition (ASLR), robust hand tracking and detection is key to good recognition accuracy. We introduce a new dataset of depth data from continuously signed American Sign Language (ASL) sentences. We present analysis showing numerous errors by the Microsoft Kinect Skeleton Tracker (MKST) in cases where the hands are close to the body, close to each other, or when the arms cross. We also propose a method based on domain-driven random forest regression (DDRFR), which predicts real-world 3D hand locations from features generated from depth images. We show that our DDRFR hand detector achieves a >20% improvement over the MKST within a margin of error of 5 cm from the ground truth.
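The regression idea can be sketched as fitting a multi-output random forest from depth-image features to 3D hand coordinates and reporting the fraction of frames within 5 cm of ground truth. The feature extraction and forest settings below are assumptions, not the domain-driven features used in the paper.

```python
# Sketch: random forest regression from depth features to 3D hand positions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_hand_regressor(depth_features, hand_xyz):
    """depth_features: (n_frames, n_features); hand_xyz: (n_frames, 3) in metres."""
    reg = RandomForestRegressor(n_estimators=100, random_state=0)
    reg.fit(depth_features, hand_xyz)
    return reg

def fraction_within(reg, depth_features, hand_xyz, radius_m=0.05):
    """Fraction of predictions whose Euclidean error is within radius_m."""
    err = np.linalg.norm(reg.predict(depth_features) - hand_xyz, axis=1)
    return float(np.mean(err <= radius_m))
```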

Enhancing haptic representation for 2D objects

This paper considers two-dimensional object tracking and recognition using combined texture and pressure cues. Tactile object recognition plays a major role in several HCI applications, especially scientific visualization for the visually impaired. A tactile display based on a pantograph mechanism and a 2×2 vibrotactile unit array was used to render haptic virtual objects. Two experiments were carried out to evaluate the efficiency and reliability of this method. The first experiment studied how combined texture and pressure cues improve haptic interaction compared to pressure cues alone. The second experiment examined users' ability to track and recognize objects with two-dimensional shapes using this method. The results indicate an improvement in boundary recognition and tracking accuracy with combined stimulation over pressure stimulation alone. They also demonstrate users' ability to readily recognize the shapes of most of the virtually represented objects.
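As a toy sketch of combining a pressure cue with a texture cue when rendering a 2D virtual object: inside the shape the pressure channel is active, and a spatial texture pattern modulates the vibrotactile array. The shape, texture function, and drive-level mapping are all illustrative assumptions, not the display's actual rendering scheme.

```python
# Toy rendering sketch: pressure cue inside the shape plus a texture grating.
import numpy as np

def render_cues(x, y, shape_mask, texture_period=0.01):
    """x, y: pantograph cursor position in metres; shape_mask(x, y) -> bool.
    Returns per-tactor drive levels (2x2 array) in [0, 1]."""
    inside = shape_mask(x, y)
    pressure = 1.0 if inside else 0.0
    # Simple spatial grating as the texture cue, rendered only inside the shape.
    texture = 0.5 * (1 + np.sin(2 * np.pi * x / texture_period)) if inside else 0.0
    drive = 0.6 * pressure + 0.4 * texture
    return np.full((2, 2), drive)

# Example: a 4 cm x 4 cm square centred at the origin.
square = lambda x, y: abs(x) <= 0.02 and abs(y) <= 0.02
levels = render_cues(0.005, 0.0, square)
```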