Goal:

To develop machine vision techniques that detect behavioral patterns involving food intake and to improve the real-world usability of the wearable sensor

Contribution:

  1. Develop an image redaction method for privacy protection through selective content removal based on semantic segmentation

  2. Develop a classification model that uses head orientation to trigger the capture of food intake images

  3. Apply machine vision to automate the analysis of food intake behavior and eating environment in support of nutritional studies

Selective Content Removal for Egocentric Wearable Camera in Nutritional Studies

Images from the wearable sensor may be used to recognize the foods being eaten, the eating environment, and other behaviors and daily activities. At the same time, captured images may contain privacy-sensitive content, such as (1) people present during social eating and/or bystanders (i.e., bystander privacy) and (2) sensitive documents that may appear on a computer screen in the view of AIM-2 (i.e., context privacy). We propose a novel approach to automatic image redaction for privacy protection that performs selective content removal using semantic segmentation with a deep-learning neural network.
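As a rough illustration of this redaction step, the Python sketch below masks out person pixels using a pretrained DeepLabV3 model from torchvision as a stand-in for the trained segmentation network; the model choice, the single "person" class, and the blacking-out strategy are assumptions for illustration only, not the method described in the paper.

    import numpy as np
    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.models.segmentation import deeplabv3_resnet50

    # Stand-in segmentation network: torchvision's DeepLabV3 pretrained on a
    # COCO subset with Pascal VOC labels, where class index 15 is "person".
    PERSON_CLASS = 15

    def redact_people(image_path: str, output_path: str) -> None:
        model = deeplabv3_resnet50(weights="DEFAULT").eval()
        img = Image.open(image_path).convert("RGB")
        preprocess = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])
        x = preprocess(img).unsqueeze(0)          # shape (1, 3, H, W)

        with torch.no_grad():
            logits = model(x)["out"][0]           # (num_classes, H, W)
        labels = logits.argmax(0).cpu().numpy()   # per-pixel class indices

        # Selective content removal: blank out privacy-sensitive pixels.
        arr = np.array(img)
        arr[labels == PERSON_CLASS] = 0
        Image.fromarray(arr).save(output_path)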

Orientation-Based Food Image Capture for Head-Mounted Egocentric Camera

Head-mounted wearable sensors for monitoring food intake operate by fusing multiple modalities, such as inertial and image sensing. Image capture may be performed periodically, which collects a large number of irrelevant images, increases power consumption, and reduces battery life. We propose an efficient approach that captures food images only when the head tilt angle estimated from the accelerometer data matches the behavioral pattern observed during food intake.
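The sketch below illustrates this gating idea, assuming a 3-axis accelerometer sample in units of g; the pitch formula is the standard gravity-based estimate, and the eating pitch range is an illustrative placeholder rather than a threshold learned in the study.

    import math

    # Illustrative pitch range (degrees) associated with eating; a placeholder,
    # not the behavioral pattern learned in the study.
    EATING_PITCH_RANGE = (-60.0, -20.0)

    def head_pitch_deg(ax: float, ay: float, az: float) -> float:
        """Estimate head pitch from the gravity direction in a 3-axis
        accelerometer sample (values in g)."""
        return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

    def should_capture(ax: float, ay: float, az: float) -> bool:
        """Trigger the camera only when head tilt falls in the eating range."""
        lo, hi = EATING_PITCH_RANGE
        return lo <= head_pitch_deg(ax, ay, az) <= hi

    # Example: a forward head tilt falls in the range and triggers a capture.
    if should_capture(0.45, 0.0, 0.89):
        print("capture image")  # placeholder for the actual camera trigger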

Automatic Recognition of Food Consumption Environment

Nutritionists have established that ingestive behavior and the ingestion environment are critical factors to monitor in addition to the nutrient and energy content of the diet. We propose an automatic method to classify the environment in which meals and snacks are consumed into four major classes (restaurant, home, workplace, and vehicle) using the VGG16 convolutional neural network (CNN). The proposed method was developed on a dataset of unrestricted, ad-libitum food consumption by 30 participants in free-living conditions, comprising 32,400 augmented images. Environment recognition was performed on sensor-detected eating episodes. We considered the impact of the camera view angle on recognition accuracy and evaluated environment detection in images captured before, during, and after eating episodes.
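The sketch below shows one way such a VGG16-based classifier could be assembled with Keras transfer learning; the frozen ImageNet backbone, the classifier head, and the hyperparameters are assumptions for illustration and are not necessarily those used in the study.

    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16

    NUM_CLASSES = 4  # restaurant, home, workplace, vehicle

    def build_environment_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
        # ImageNet-pretrained VGG16 backbone with the convolutional base frozen
        # (an assumed transfer-learning setup, for illustration only).
        backbone = VGG16(weights="imagenet", include_top=False,
                         input_shape=input_shape)
        backbone.trainable = False

        model = models.Sequential([
            backbone,
            layers.Flatten(),
            layers.Dense(256, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model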

References

  • M. A. Hassan and E. Sazonov, "Selective Content Removal for Egocentric Wearable Camera in Nutritional Studies," IEEE Access, 2020.

  • M. A. Hassan and E. Sazonov, "Orientation-Based Food Image Capture for Head-Mounted Egocentric Camera," EMBC, 2019.