Videos

University of Salford: Carebot

Introduction

Carebot is a home service robot designed to serve the disabled and the elderly, using cutting-edge technologies for sensing and monitoring as well as multidisciplinary interfaces for decision making and control. In Suffolk, Neil and Linda Bowles hosted Carebot, which can issue medication reminders and even call an ambulance.

Interview

As featured on the BBC 2 programme 'Six Robots and Us', Carebot was created by Dr Theo Theodoridis, Lecturer in Robotics at the University of Salford. We spoke to Theo about Carebot, what it can do, and how far away we are from all having a robot in our own homes.

Lab Experiments

The robot’s AI includes three major interfaces that allow Carebot to work as a companion, a monitor, and an assistant. The companion model implements the robot’s personality, allowing natural-language communication with people; this makes it approachable and friendly, as it can discuss almost anything, including the latest news, sports, technology, science, horoscopes, the weather, and more. The monitor model enables the robot to track and follow humans, detect falls, gas leaks, and blackouts, make phone calls and send messages in emergency scenarios, and provide medication reminders on a regular basis. Finally, the assistant model can be used to fetch and carry goods anywhere around the house. To do this, the robot comprises an agile robotic arm with a large gripper as its end effector, which can pick up objects of up to 300 g. For heavier objects, up to 1 kg, Carebot can be commanded to deploy its serving tray.
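
As a rough illustration of how the assistant model's payload rule described above might be expressed in software, here is a minimal Python sketch; the mode names and function are hypothetical, while the 300 g and 1 kg limits simply restate the figures given above.

```python
# Hypothetical sketch of Carebot's three operating interfaces and the
# assistant model's payload rule (300 g gripper limit, 1 kg tray limit).

from enum import Enum

class Mode(Enum):
    COMPANION = "companion"   # natural-language conversation
    MONITOR = "monitor"       # fall/gas/blackout detection, reminders
    ASSISTANT = "assistant"   # fetch-and-carry tasks

GRIPPER_LIMIT_G = 300   # objects up to 300 g are picked with the gripper
TRAY_LIMIT_G = 1000     # heavier objects up to 1 kg go on the serving tray

def choose_carry_method(weight_g: float) -> str:
    """Decide how the assistant model should transport an object."""
    if weight_g <= GRIPPER_LIMIT_G:
        return "gripper"
    if weight_g <= TRAY_LIMIT_G:
        return "serving_tray"
    return "refuse"  # beyond the robot's payload

print(choose_carry_method(250))   # -> gripper
print(choose_carry_method(800))   # -> serving_tray
```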

University of Salford: Miscellaneous Technologies

The Living Lab

The University of Salford accommodates an experimental environment called the Living Lab, a studio-flat-style laboratory equipped with a small kitchen with modern appliances, a bed and sofa, a large TV, and several sensors that monitor the state of the environment and the condition of the user(s) living in it. The lab has been customised with a head monitoring system and various types of mobile robots in order to aid elderly and disabled users. The lab accommodates the following technologies: the Avatar, a supervisory monitor used to detect user poses and falls and to coordinate the robots; the Teddy robot, equipped with physiological sensors to monitor the user's condition and provide emergency services; the Butler robot, a medication announcer and dispenser system; and the Nurse robot, used to determine user illness and assess health condition.

Intelligent Control Interfaces for a Robotic Wheelchair

Assistive interfaces designed for the elderly or the disabled mainly focus on prosthetic robots or exoskeletons. In this project, however, we introduce three interfaces that enable a wheelchair to be controlled either autonomously or via muscle (EMG) or brain (EEG) signals. The autonomous control interface uses a three-zone, vector-based obstacle avoidance method for safe navigation. The muscle control interface uses a single EMG channel sensor and a Gaussian-based control method to navigate the wheelchair using four patterns. The brain control interface uses a forehead-based EEG sensor and a Gaussian classifier to model two thinking states: concentration (Beta/Gamma waves) and relaxation (Alpha/Delta/Theta waves), which are harnessed by a GUI to select driving commands.
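
To illustrate the two-state Gaussian classifier idea described above, here is a minimal sketch; the band-power feature, calibration values, and thresholds are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch of a two-state Gaussian classifier for an EEG band-power
# feature (concentration vs relaxation). The training samples and feature
# definition are placeholders, not the project's actual signals.

import numpy as np

def fit_gaussian(samples):
    """Return mean and standard deviation of a 1-D feature."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(), samples.std() + 1e-9

def log_likelihood(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std)

# Hypothetical band-power features recorded during a calibration session.
concentration_train = [0.82, 0.75, 0.90, 0.78, 0.85]   # stronger Beta/Gamma power
relaxation_train    = [0.30, 0.25, 0.35, 0.28, 0.22]   # stronger Alpha/Delta/Theta power

conc_mu, conc_sd = fit_gaussian(concentration_train)
relax_mu, relax_sd = fit_gaussian(relaxation_train)

def classify(feature: float) -> str:
    """Pick the thinking state whose Gaussian explains the feature best."""
    if log_likelihood(feature, conc_mu, conc_sd) > log_likelihood(feature, relax_mu, relax_sd):
        return "concentration"
    return "relaxation"

print(classify(0.80))  # -> concentration
print(classify(0.27))  # -> relaxation
```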

Computer Vision Methods

This video demonstrates six computer vision algorithms used for robotic applications: (1) colour tracking for object tracking; (2) intensity histograms for light control; (3) edge detection for corridor navigation; (4) template matching for object recognition; (5) visual descriptors (region, edge, and colour histograms); and (6) 3D stereo vision for depth mapping, planning, and obstacle avoidance.
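
As a brief example of the first of these methods (colour tracking), the sketch below thresholds a frame in HSV space with OpenCV and extracts the blob centroid; the HSV range and camera source are arbitrary placeholders.

```python
# Minimal colour-tracking sketch: threshold a frame in HSV space and
# report the centroid of the matching blob. The HSV range is arbitrary.

import cv2
import numpy as np

def track_colour(frame_bgr, lower_hsv=(100, 120, 70), upper_hsv=(130, 255, 255)):
    """Return the (x, y) centroid of the dominant coloured blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)        # any camera or video file
ok, frame = cap.read()
if ok:
    print(track_colour(frame))   # centroid of the tracked colour, if present
cap.release()
```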

University of Essex

Kinect Enabled Monte Carlo Localisation for Robot Control

Proximity sensors and 2D vision methods have been shown to work robustly in particle-filter-based Monte Carlo Localisation (MCL). It is interesting, however, to examine whether modern 3D vision sensors are equally efficient for localising a robotic wheelchair with MCL. In this work, we introduce a visual Region Locator Descriptor, acquired from a 3D map using the Kinect sensor, to conduct localisation. The descriptor segments the Kinect’s depth map into a grid of 36 regions, where the depth of each column cell is used as a distance range for the measurement model of a particle filter. The experimental work concentrated on a comparison of three different localisation cases: (a) an odometry model without MCL, (b) MCL with sonar sensors only, and (c) MCL with the Kinect sensor only. The comparative study demonstrated the efficiency of a modern 3D depth sensor such as the Kinect, which can be used reliably for wheelchair localisation.
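
A rough sketch of the descriptor-plus-measurement-model idea follows; the 6 x 6 grid layout, per-cell median depth, and Gaussian noise model are assumptions made for illustration and may differ from the published method.

```python
# Sketch of a region-locator-style descriptor: the depth map is divided
# into a 6 x 6 grid (36 regions) and each cell's median depth acts like a
# range reading in the particle filter's measurement model.

import numpy as np

def region_locator_descriptor(depth_map, rows=6, cols=6):
    """Return a (rows*cols,) vector of median depths per grid cell."""
    h, w = depth_map.shape
    cells = []
    for r in range(rows):
        for c in range(cols):
            cell = depth_map[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            cells.append(np.median(cell))
    return np.array(cells)

def particle_weight(observed, expected, sigma=0.15):
    """Gaussian likelihood of the observed cell depths given the map prediction."""
    err = observed - expected
    return float(np.exp(-0.5 * np.sum((err / sigma) ** 2)))

depth = np.random.uniform(0.5, 4.0, size=(480, 640))   # fake Kinect frame (metres)
z = region_locator_descriptor(depth)
print(z.shape, particle_weight(z, z))                   # (36,) 1.0
```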

Multimodal Robot Control Interfaces

Understanding human behaviours is crucially important for developing Human-Robot Interfaces (HRIs) in scenarios related to human assistance, such as object handling and transportation, tooling, and safety control in remote areas. In this project we demonstrate the control of diverse robots using a multimodal (multi-sensor fusion) architecture, in line with high-level human-robot control. The purpose of such interfaces is to improve the operator's flexibility, reliability, and robustness when commanding, collaborating with, coordinating, and controlling mobile robots. For the experimentation, we used a custom multi-sensor apparatus that integrates voice, electromyographic (EMG), and inertial sensors.

Fall Detection and Condition Assessment

Fall detection addresses the needs of the elderly and the disabled, for whom accidental falls can cause severe and sometimes irreversible injuries. An assistive service mobile robot is intended to monitor people in their daily activities and validate emergency scenarios such as front, back, and side falls. The service robot is deployed in a house environment and is equipped with a Kinect sensor designated to extract a person's skeletal model, which is used for human tracking, fall detection, and condition assessment.
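
As a toy example of how a skeletal model can feed a fall test, the sketch below flags a fall when the head drops near floor level and the torso is close to horizontal; the joint names and thresholds are illustrative assumptions, not the project's actual classifier.

```python
# Toy fall-detection heuristic on a Kinect-style skeleton: a fall is
# flagged when the head is close to the floor and the torso is nearly
# horizontal. Joint names and thresholds are illustrative only.

import numpy as np

def is_fall(joints, head_height_thresh=0.5, torso_angle_thresh_deg=35.0):
    """joints: dict of joint name -> (x, y, z), with y as height in metres."""
    head = np.array(joints["head"])
    hip = np.array(joints["hip_centre"])
    shoulder = np.array(joints["shoulder_centre"])

    torso = shoulder - hip
    vertical = np.array([0.0, 1.0, 0.0])
    cos_angle = abs(np.dot(torso, vertical)) / (np.linalg.norm(torso) + 1e-9)
    torso_angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    return head[1] < head_height_thresh and torso_angle > torso_angle_thresh_deg

standing = {"head": (0, 1.7, 2), "shoulder_centre": (0, 1.5, 2), "hip_centre": (0, 1.0, 2)}
fallen   = {"head": (0, 0.2, 2), "shoulder_centre": (0, 0.25, 2.3), "hip_centre": (0, 0.3, 2.9)}
print(is_fall(standing), is_fall(fallen))   # False True
```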

Intelligent Crime Recognition Surveillance Robots

This video demonstrates the representation of multiple 3D kinematic models that resemble the limbs and head of an adult human subject. The subject performs 4 normal and 5 aggressive activities, whose spatial representations are shown as red and blue clusters respectively. The physical activities were recorded by the Delsys EMG apparatus and the Vicon 3D tracker. The datasets can be found in the UCI Machine Learning Repository and have primarily been used for physical action recognition with dynamic neural networks.

Technological Educational Institute of Piraeus

Bomb Disposal Robot: Robo-Spy

This video presents the bomb-disposal robot Robo-Spy demonstrating scenarios such as fire start, waypoint navigation, and object detection and grasping. The robot features several technical faculties and tools, such as a rocket system, a fire gun, a water extinguisher, an alarm system, and a spotlight. The robot is equipped with a CCD camera, tip and terminal micro-switches, and proximity sensors, namely an ultrasonic sensor and peripheral infrared sensors. Navigation is provided by a differential traction platform, whereas a 3-DOF manipulator and a feedback gripper are used for object manipulation.

NASA - Jet Propulsion Laboratory

Visual Navigation and Planning

Robots with limited yet efficient sensors, such as colour cameras, can be good test beds for navigation and tracking. Utilising limited sensor resources, such as a single colour camera, is a challenging task for visual guidance and tracking applications. In this project, we solve five problems: (a) data fusion using a fuzzy TSK model, (b) visual obstacle avoidance using a monocular, 5-region, correlation-based stereo vision algorithm, (c) short-term planning using a 4-state Markov model, (d) visual object tracking using the blob colouring algorithm, and (e) way-point navigation using a velocity model. Future applications of this kind will focus on clearance scenarios in which small rovers clear an environment of rocks, rubbish, etc.
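
As a small illustration of item (e), way-point navigation with a velocity model, here is a sketch that sets linear and angular velocities from the distance and heading error to the current way-point; the gains and arrival radius are arbitrary, not the project's values.

```python
# Sketch of a way-point velocity model: linear and angular velocities are
# derived from the distance and heading error to the current way-point.
# Gains and thresholds are arbitrary illustrative values.

import math

def waypoint_velocity(pose, waypoint, k_lin=0.5, k_ang=1.5, arrive_radius=0.2):
    """pose = (x, y, theta); waypoint = (x, y). Returns (v, w, arrived)."""
    x, y, theta = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    distance = math.hypot(dx, dy)
    if distance < arrive_radius:
        return 0.0, 0.0, True

    heading_error = math.atan2(dy, dx) - theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]

    v = k_lin * distance * max(0.0, math.cos(heading_error))   # slow down when misaligned
    w = k_ang * heading_error
    return v, w, False

print(waypoint_velocity((0.0, 0.0, 0.0), (2.0, 1.0)))
```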

Multimodal Robot Control Interfaces

This video demonstrates a multimodal sensor interface that consists of a microphone (voice control), an electromyographic (EMG) channel (muscle control), a 9-axis Inertial Measurement Unit (inertial control), and a joystick (manual control). The sensor modalities are used for reliable, assistive robot control via sensor fusion, with one modality replacing another when a sensor fails. The apparatus is connected to a nano Windows 7 PC, and robot control is issued over a Bluetooth connection.
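
The sketch below illustrates one simple way the replace-on-failure behaviour could be arbitrated: modalities are tried in priority order and the first healthy reading wins. The class and priority order are hypothetical, not the interface's actual fusion scheme.

```python
# Sketch of modality arbitration with graceful fallback: command sources
# are polled in priority order and the first healthy reading is used.

from typing import Callable, Optional

class Modality:
    def __init__(self, name: str, read: Callable[[], Optional[str]]):
        self.name = name
        self.read = read          # returns a command string, or None if the sensor fails

def arbitrate(modalities):
    """Return (source, command) from the highest-priority working modality."""
    for m in modalities:
        command = m.read()
        if command is not None:
            return m.name, command
    return "none", "stop"         # safe default when every modality fails

modalities = [
    Modality("joystick", lambda: None),          # e.g. disconnected
    Modality("voice",    lambda: None),          # e.g. too noisy
    Modality("emg",      lambda: "forward"),     # healthy channel
    Modality("imu",      lambda: "forward"),
]
print(arbitrate(modalities))    # -> ('emg', 'forward')
```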

Hand Gesture-Based Rover Control

Teleoperated reactive control is the implementation of a consistent mapping from directed sensor input commands (the operator's wearable device) to multiple control outputs. JPL's BioSleeve was one of the main sensors used to classify finger and hand gestures for controlling tracked rovers. This involved an ensemble of machine learning classifiers, as well as a dimensionality reduction method (Sparse-PCA) applied in a pre-classification phase. Sparse-PCA is used to reduce hardware resources (EMG channels) to the minimum possible while sustaining high classification accuracy for quality control.
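
To illustrate the reduce-then-classify pipeline in general terms, here is a sketch using scikit-learn's SparsePCA followed by an SVM on synthetic "EMG" features; the channel count, data, and classifier choice are placeholders, not the project's configuration.

```python
# Sketch of Sparse-PCA-then-classify on synthetic EMG-like features:
# sparse components highlight a small subset of channels, and a standard
# classifier is trained on the reduced representation.

import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels, n_samples = 16, 200
X = rng.normal(size=(n_samples, n_channels))
y = (X[:, 3] + X[:, 7] > 0).astype(int)        # only two channels actually matter

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

spca = SparsePCA(n_components=4, alpha=0.5, random_state=0)
Z_train = spca.fit_transform(X_train)
Z_test = spca.transform(X_test)

clf = SVC(kernel="rbf").fit(Z_train, y_train)
print("test accuracy:", clf.score(Z_test, y_test))
print("non-zero loadings per component:", (spca.components_ != 0).sum(axis=1))
```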

Multi-Robot Gesture Control Using BNF Grammars

Beyond single-gesture classification for robot control, an alternative approach is the structuring of a gesture language consisting of hand signs that compose gesture syntaxes. Such syntaxes of gestures (expressions) are first classified and then composed into sentences using a gesture grammar interpreter (a BNF scheme). These gesture sequences are used for commanding robot teams, from which the human operator can select an individual robot, a team, or the whole group of robots. This method suggests an entirely new and innovative way of using syntaxes of gestures for robot control, and instigates a whole new field of "quiet communication". Relevant applications include assisting people with disabilities as well as military use.
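
The following toy interpreter shows the flavour of parsing a gesture sentence against a BNF-style grammar; the grammar rules and gesture tokens are invented for illustration and are not the project's actual language.

```python
# Toy interpreter for a gesture "sentence" under a hypothetical BNF-style grammar:
#   <command> ::= <target> <action>
#   <target>  ::= "select" <robot> | "team" <id> | "all"
#   <action>  ::= "go" <direction> | "stop"
# The gesture tokens would come from an upstream gesture classifier.

def parse_command(tokens):
    """Parse a list of gesture tokens into (target, action), or raise ValueError."""
    it = iter(tokens)

    def next_token():
        try:
            return next(it)
        except StopIteration:
            raise ValueError("incomplete gesture sentence")

    t = next_token()                      # <target>
    if t == "select":
        target = ("robot", next_token())
    elif t == "team":
        target = ("team", next_token())
    elif t == "all":
        target = ("all", None)
    else:
        raise ValueError(f"unexpected target gesture: {t}")

    a = next_token()                      # <action>
    if a == "go":
        action = ("go", next_token())
    elif a == "stop":
        action = ("stop", None)
    else:
        raise ValueError(f"unexpected action gesture: {a}")

    return target, action

print(parse_command(["select", "robot_2", "go", "forward"]))
print(parse_command(["all", "stop"]))
```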

EMG Prosthesis Robot Control

While reactive control over small-scale data or a few input variables is considered a primitive method for driving or controlling robots, it is nevertheless a highly practical approach in prosthesis control. In this project, we attempted not only to classify an enormous dataset, but also to test and reveal the importance of using powerful, well-suited classifier systems. A DARPA RIC Chicago dataset, recorded from amputees' chests, was used for controlling prosthetic robot manipulators. The dataset consists of 128 variables (mono-polar EMG sensors), 19 classes, and some tens of thousands of instances per class. The purpose of carrying out this research is to aid the disabled, such as amputees and paraplegics, allowing them to gain independence in their daily activities.

2D Terrain Classification

This video demonstrates a 2D terrain classification method based on a Radial Basis Function network and Region Locator Descriptors. Classification of images is performed using the down-looking camera of a UAV. Once a terrain is recognised, the UAV is programmed to fly to the next terrain. Each terrain constitutes an aerial way-point that the UAV needs to traverse.
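
As an illustration of the Radial Basis Function network idea, the sketch below implements a minimal RBF classifier (Gaussian activations around sampled centres plus a least-squares output layer) on synthetic descriptor vectors; the data, labels, and hyperparameters are placeholders.

```python
# Minimal RBF-network classifier sketch: Gaussian activations around a few
# centres followed by a least-squares output layer. Inputs and terrain
# labels below are synthetic placeholders, not real UAV descriptors.

import numpy as np

class RBFNet:
    def __init__(self, n_centres=10, gamma=1.0, seed=0):
        self.n_centres, self.gamma = n_centres, gamma
        self.rng = np.random.default_rng(seed)

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        idx = self.rng.choice(len(X), self.n_centres, replace=False)
        self.centres = X[idx]                          # centres sampled from the data
        Y = np.eye(y.max() + 1)[y]                     # one-hot targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict(self, X):
        return (self._phi(X) @ self.W).argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))                          # fake region descriptors
y = (X[:, 0] > 0).astype(int)                          # two fake terrain classes
model = RBFNet(n_centres=15, gamma=0.5).fit(X, y)
print("training accuracy:", (model.predict(X) == y).mean())
```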

Aerial Image Stitching

This video demonstrates a path building method based on aerial image stitching. The path is built progressively on a mosaic using the down-looking camera of a UAV.
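
One common way to realise pairwise stitching, sketched below, is to match ORB features between consecutive frames, estimate a homography with RANSAC, and warp the new frame into the mosaic; the file names and blending-by-overwrite are placeholders and may differ from the method used in the video.

```python
# Sketch of pairwise aerial image stitching: ORB features are matched
# between frames, a homography is estimated with RANSAC, and the new
# frame is warped into the mosaic.

import cv2
import numpy as np

def stitch_pair(mosaic, frame):
    """Warp `frame` into the coordinate frame of `mosaic` and overwrite-blend it."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(mosaic, None)
    k2, d2 = orb.detectAndCompute(frame, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped > 0
    mosaic[mask] = warped[mask]
    return mosaic

mosaic = cv2.imread("frame_000.png")        # placeholder file names
frame = cv2.imread("frame_001.png")
if mosaic is not None and frame is not None:
    cv2.imwrite("mosaic.png", stitch_pair(mosaic, frame))
```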

Stereo Visual Odometry

This video demonstrates an odometry method based on stereo vision. The trajectory is built progressively in an x, y, z coordinate frame using a stereo camera mounted on a UAV.
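
A typical frame-to-frame pipeline for this kind of odometry is sketched below: disparity gives 3-D points in the previous frame, matched features in the current frame feed a PnP solver, and the resulting relative poses are chained into a trajectory. The calibration values are assumptions, and the actual method in the video may differ.

```python
# Sketch of frame-to-frame stereo visual odometry: depth from disparity
# yields 3-D points in the previous frame, matched 2-D features in the
# current frame feed solvePnPRansac, and the relative poses are chained
# into a trajectory. Calibration values below are assumed placeholders.

import cv2
import numpy as np

fx = fy = 700.0; cx, cy = 320.0, 240.0; baseline = 0.12          # assumed calibration
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

def relative_pose(prev_left, prev_disparity, curr_left):
    """Estimate the camera motion between two stereo frames (previous -> current)."""
    orb = cv2.ORB_create(1500)
    k1, d1 = orb.detectAndCompute(prev_left, None)
    k2, d2 = orb.detectAndCompute(curr_left, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    object_points, image_points = [], []
    for m in matches:
        u, v = k1[m.queryIdx].pt
        disp = prev_disparity[int(v), int(u)]
        if disp <= 0:
            continue                                             # no depth available here
        z = fx * baseline / disp
        object_points.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        image_points.append(k2[m.trainIdx].pt)

    _, rvec, tvec, _ = cv2.solvePnPRansac(
        np.float32(object_points), np.float32(image_points), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec            # chain these per frame pair to build the x, y, z trajectory
```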