
Controls for the physically challenged using the Kinect


The aim of this project is to test the feasibility of augmenting or replacing existing physical controls, such as buttons or joysticks, with virtual controls and gesture recognition. The drive for this came from the health care professionals at Beaumont College, who find that some of their students struggle to operate the existing mechanical controls they use to drive software, for example to synthesise speech.

I started out using the Microsoft Kinect. Recently I borrowed a Leap Motion development kit - thanks to Ming Ki Chong, who was sent the kit. I can see that porting some of my Kinect work to the Leap will make for an easier real-world implementation than the Kinect. More details of this work can be found here. Below is a picture of Kirsty simultaneously using the Kinect with my virtual buttons software (details below) while trying out the Leap.


Kinect programming

Using GesturePak

The video below shows an early version of the software I've been developing. When the video starts, I demonstrate some gesture recognition that I created with the GesturePak package. GesturePak allows custom gestures to be recorded. Its author is working on a new version which will have an option to use the Kinect's seated mode - useful for wheelchair users. I'm using quite large gestures and have yet to test small ones. Sometimes the gestures work reliably, sometimes not. This could be down to my coding, but I'm not yet fully convinced of the reliability of gesture recognition.

Virtual Buttons

I used the Coding4Fun toolkit to create 'Hover Buttons' on the hands and head. These have a timer before activation. The timer can be set to zero for instantaneous interaction, though I find that setting it to 10 ms gives more reliable triggering; for the video, the timer is set to 500 ms. I'll put in a slider to allow the user to adjust the Hover Button timer. The advantage of Hover Buttons over regular buttons is that the timer reduces the chance of accidentally activating a control.
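As a rough illustration of the idea, here is a minimal C# sketch of the dwell-timer principle. This is my own class, not the actual Coding4Fun HoverButton, and the names are mine; it simply shows why a short dwell time cuts down accidental activations.

using System;

// Sketch only: the button fires only after the hand has stayed over it
// for the configured dwell time.
public class DwellButton
{
    private DateTime? hoverStart;            // when the hand first moved over the button
    public TimeSpan DwellTime { get; set; }  // e.g. 500 ms for the video, 10 ms minimum
    public event EventHandler Activated;

    public DwellButton(double milliseconds)
    {
        DwellTime = TimeSpan.FromMilliseconds(milliseconds);
    }

    // Call once per skeleton frame with "is the tracked hand over this button?"
    public void Update(bool handIsOverButton)
    {
        if (!handIsOverButton)
        {
            hoverStart = null;                    // hand left the button: reset the timer
            return;
        }

        if (hoverStart == null)
            hoverStart = DateTime.UtcNow;         // hand just arrived: start timing

        if (DateTime.UtcNow - hoverStart.Value >= DwellTime)
        {
            hoverStart = null;                    // fire once, then require a fresh hover
            EventHandler handler = Activated;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }
}

The planned slider would simply write its value into DwellTime, which is how the user could tune the trigger delay.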

The controls that are activated by the Hover Buttons can be relocated - this is shown in the video after the gesture recognition demo.

An additional idea is to wrap the skeleton in a cartoon character to make the software more fun to use.

Visual Studio 2012 and the Kinect for Windows SDK v1.6 are being used for development.

Matt's Kinect Demo

I can easily add more controls. The photo below shows a more recent version of the software with a 3D button (the oval button). This is activated when the hand is a certain distance away from the chest, so the user has to push towards it to press it. The slider bar shows how far the hand is from activating the button. I spent some time ensuring that the software only tracks the closest person in the Kinect's field of view - otherwise somebody walking past could hijack the skeleton from the user.
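For anyone curious how this fits together, below is a small C# sketch against the Kinect for Windows SDK v1.x. It picks only the closest skeleton (so passers-by can't hijack it) and treats the right hand as a 3D push button; the class name, threshold value and depth format are my own choices, not the exact ones in my software.

using System;
using System.Linq;
using Microsoft.Kinect;

class PushButtonTracker
{
    const float PushThresholdMetres = 0.45f;   // hypothetical hand-to-chest distance that triggers the button

    public void Start(KinectSensor sensor)
    {
        sensor.SkeletonStream.Enable();
        sensor.SkeletonStream.AppChoosesSkeletons = true;   // we decide which skeleton to track
        sensor.SkeletonFrameReady += OnSkeletonFrame;
        sensor.Start();
    }

    void OnSkeletonFrame(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;

            Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            // Track only the person nearest to the sensor.
            Skeleton closest = skeletons
                .Where(s => s.TrackingState != SkeletonTrackingState.NotTracked)
                .OrderBy(s => s.Position.Z)
                .FirstOrDefault();
            if (closest == null) return;

            ((KinectSensor)sender).SkeletonStream.ChooseSkeletons(closest.TrackingId);

            Joint hand = closest.Joints[JointType.HandRight];
            Joint chest = closest.Joints[JointType.ShoulderCenter];
            if (hand.TrackingState != JointTrackingState.Tracked ||
                chest.TrackingState != JointTrackingState.Tracked) return;

            // How far the hand is pushed out in front of the chest, in metres.
            float push = chest.Position.Z - hand.Position.Z;

            // The slider bar in the UI can be driven from push / PushThresholdMetres.
            if (push > PushThresholdMetres)
            {
                // 3D button activated: trigger the associated control here.
            }
        }
    }
}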


Using sample code from Programming with the Kinect for Windows SDK

I started looking at the sample code that comes with the book Programming with the Kinect for Windows Software Development Kit by David Catuhe, using it as a starting point. The speech recognition is excellent and I extended the samples a little: toggling more of the controls with voice commands, making the detected-gestures window scroll nicely, and adding a voice-controlled exit command to leave the program cleanly. There's a lot to get into with this book and the software it presents.

The advantage of this code over GesturePak is that I'm not using a 'black box': I can tweak and play with the underlying algorithms. The disadvantage is that I have to figure out how those algorithms work and make them do what I want! The software works in near mode and seated mode out of the box, and I hard-coded these to be the default options. It will be some time until I can figure it all out.

The example code has a facility for recording postures and gestures, but I've not really got this to work reliably. The hard-coded gestures, which use algorithms to describe their motion, work well. It also looks like gestures can be recorded and then recognised with pattern matching. Still, if this were simple, GesturePak wouldn't have a market!
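For reference, hard-coding those defaults only takes a few lines against the Kinect for Windows SDK v1.5+. This is a sketch; the method name and the depth resolution are my own choices rather than the book's.

using Microsoft.Kinect;

static void ConfigureSensor(KinectSensor sensor)
{
    sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
    sensor.DepthStream.Range = DepthRange.Near;                        // near mode by default
    sensor.SkeletonStream.Enable();
    sensor.SkeletonStream.EnableTrackingInNearRange = true;            // allow skeleton tracking in near mode
    sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;  // seated (upper-body) mode by default
    sensor.Start();
}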

Naturally, the demo code from this book is professionally structured and easy to follow compared with the kludge of cobbled-together examples I've lashed up. Even so, the demo code seems to gobble up 90-odd percent of the CPU time while it is running, whereas my homebrew mish-mash trots along at about 70%. I guess the demo code must be doing a lot more than mine that I'm not aware of.


Modified Kinect.Toolbox screen display

Using Kinectspace

I faithfully followed the installation procedure at http://code.google.com/p/simple-openni/wiki/Installation#Windows but with no joy. Perhaps it is because I am using the Kinect for Windows rather than the Xbox version - the drivers are not recognised. I installed it all onto a Windows 7 laptop and uninstalled the Microsoft drivers and software. Even so, it is a major hassle if the software will not run concurrently with the Microsoft drivers and with the Kinect for Windows, as I need access to near mode for my target participant group. The Kinectspace software looks great, but until I can test it out, it is of no use to me. I'm probably doing something simple wrong.


