About

I often engage students in a range of projects that involve simple robotics but can also include complex reasoning, such as modelling intelligent behaviour, path planning, machine learning, collision avoidance, smart human-computer interaction, conversational agents and so on.

At present I have Nao humanoid robots, small land and air drones, several Lego Mindstorms robot kits, Oculus Rifts, and Microsoft Kinect devices.

You can find the list of currently active projects below.

If you are a student (or group) interested in taking on a project, feel free to contact me. Please include your student number(s) and a brief description of your interests, including your skills and why you are the right fit for this sort of project.

--
John Thangarajah (john.thangarajah AT rmit.edu.au)

Kinect-Storm Bot

This kick-starter project looks at integrating the Kinect with Lego Mindstorms to create a robot that can be navigated using human gestures. A group of six students from different programs (undergraduate, postgraduate, computer science, games and design) joined forces under staff supervision and are currently on their way to demo their creation at the RMIT Open Day on 12 August 2012. Come visit us! You can also follow the project blog, which contains a video demo.
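
To give a feel for the core logic, here is a minimal Python sketch of how Kinect skeleton data might be mapped to drive commands for the Mindstorms robot. The joint names, command set and example frame are illustrative assumptions, not the team's actual code:

    def gesture_to_command(skeleton):
        """Translate joint heights into a simple drive command."""
        shoulder_y = skeleton["shoulder_center"][1]
        right_up = skeleton["right_hand"][1] > shoulder_y
        left_up = skeleton["left_hand"][1] > shoulder_y
        if right_up and left_up:
            return "STOP"          # both hands raised
        if right_up:
            return "TURN_RIGHT"    # right hand above the shoulder
        if left_up:
            return "TURN_LEFT"
        return "FORWARD"           # hands down: drive ahead

    # Example frame: (x, y, z) joint positions in metres, camera space.
    # A real run would pull these from the Kinect SDK and forward the
    # command to the NXT brick over Bluetooth.
    frame = {"shoulder_center": (0.0, 1.4, 2.0),
             "right_hand": (0.3, 1.6, 2.0),
             "left_hand": (-0.3, 1.0, 2.0)}
    print(gesture_to_command(frame))   # -> TURN_RIGHT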




PTOR

Want to build a really cool system using the Oculus Rift? Public speaking is a major problem for a lot of people. This project is about using either Unity 3D or Unreal to build a virtual audience in a seminar/presentation room that lets the user practise a talk.


The idea is simple, but there is room for plenty of advanced research here. For example, we could use the speaker's speech patterns to make audience members fall asleep, yawn or pick up their mobile phones, hinting to the user that the talk is perhaps getting boring. We could also hook it up to a Microsoft Kinect to provide feedback on body gestures. So there is plenty to do here.

See the first step here.
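
As a hint of what the "boring speech" detection could look like, here is a small Python sketch that maps simple speech statistics to an audience reaction. The feature names and thresholds are illustrative assumptions; a real system would derive these values from the microphone feed and tune them empirically:

    def audience_reaction(pitch_variance, words_per_minute, pause_ratio):
        """Map simple speech statistics to an audience behaviour cue."""
        if pitch_variance < 100.0 and words_per_minute < 100:
            return "sleep"      # flat, slow delivery
        if pause_ratio > 0.4:
            return "phone"      # long silences: audience checks phones
        if pitch_variance < 200.0:
            return "yawn"       # somewhat monotone
        return "attentive"

    print(audience_reaction(pitch_variance=80.0,
                            words_per_minute=90,
                            pause_ratio=0.1))   # -> sleep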

Kinect 3D

The aim of this project is to take the 3D point-cloud data obtained from a Kinect device in real time and render it as 3D objects within the Unity framework. We have done some preliminary work in this space, but there is more to be done, in particular evaluating some of the algorithms we have developed. Click here for more about the project, including a demo video.
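
The first step in any such pipeline is the standard back-projection of a depth image into a point cloud using the pinhole camera model. Below is a minimal numpy sketch; the intrinsics are typical published Kinect v1 values and stand in for a proper per-device calibration:

    import numpy as np

    FX, FY = 594.2, 591.0    # focal lengths in pixels (assumed calibration)
    CX, CY = 320.0, 240.0    # principal point for a 640x480 depth image

    def depth_to_points(depth):
        """depth: (480, 640) array of metres; returns an (N, 3) point cloud."""
        v, u = np.indices(depth.shape)   # pixel row/column grids
        z = depth
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        points = np.dstack((x, y, z)).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no depth reading

    depth = np.full((480, 640), 2.0)     # fake frame: flat wall 2 m away
    print(depth_to_points(depth).shape)  # -> (307200, 3)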

Angry Birds Challenge

From the AI 2012 page - "The task of this Challenge is to develop an Angry Birds playing agent that is able to successfully play the game autonomously and without human intervention. This may require analysing the structure of the objects and to infer how to shoot the birds in order to destroy the pigs and to score most points. In order to successfully solve this challenge, participants can benefit from combining different areas of AI such as computer vision, knowledge representation and reasoning, planning, and machine learning. Successfully integrating methods from these different areas is one of the great challenges of AI." Click here for a video of the project.
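
One small but concrete piece of the reasoning involved is ballistics: given a launch speed and a target offset, the projectile equation yields up to two candidate launch angles. Here is a minimal Python sketch; the speed and gravity values are illustrative, since the game's actual physics constants would have to be estimated from observation:

    import math

    def launch_angles(v, dx, dy, g=9.8):
        """Return the low and high launch angles (radians) that hit (dx, dy)."""
        disc = v**4 - g * (g * dx**2 + 2 * dy * v**2)
        if disc < 0:
            return None                          # target out of range at this speed
        root = math.sqrt(disc)
        return (math.atan2(v**2 - root, g * dx),  # flat, direct shot
                math.atan2(v**2 + root, g * dx))  # high, lobbed shot

    angles = launch_angles(v=20.0, dx=30.0, dy=-5.0)
    print([math.degrees(a) for a in angles])      # two ways to hit the same pig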

Unreal Bots

The project is to create automated Unreal bots using intelligent-agent technology. Research groups are already working on such bots, and we want to create our own. The idea is to participate in international tournaments.

There are many research angles to this project, in particular around agent-oriented software engineering. We are interested in a methodology for creating such intelligent game bots and in techniques for testing intelligent systems. The first task will be setting up the game infrastructure; the bots will then be developed in close collaboration with staff.

Click here for a demo day video.
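
To illustrate the agent-oriented flavour of the work, here is a minimal Python sketch of a BDI-style sense-deliberate-act step, where beliefs about the game state select between goal-directed plans. The goals, plans and preconditions are invented for illustration; a real bot would connect to the game through an interface such as Pogamut:

    PLANS = {  # goal -> ordered list of (plan name, precondition) pairs
        "survive": [("retreat", lambda b: b["health"] < 30)],
        "score":   [("attack",  lambda b: b["enemy_visible"]),
                    ("explore", lambda b: True)],
    }

    def deliberate(beliefs):
        """Pick the first applicable plan, honouring goal priority."""
        for goal in ("survive", "score"):
            for plan, applicable in PLANS[goal]:
                if applicable(beliefs):
                    return plan
        return "idle"

    beliefs = {"health": 80, "enemy_visible": True}
    print(deliberate(beliefs))   # -> attack
    beliefs["health"] = 20
    print(deliberate(beliefs))   # -> retreat (survival overrides scoring)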



TurtleBot FR (Facial Recognition)

The goal of this project is for the TurtleBot to wander around, waiting for someone to acknowledge its existence via a simple hand wave. Once the robot sees this, it will attempt to recognise the person and say hello. If the person is new to the TurtleBot, it will add their face to its database and remember it for the next time they meet.


Click here for a blog of activity on this project, including some cool videos.
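
The control loop behind this behaviour is straightforward to sketch. In the Python outline below, detect_wave() and extract_face() are placeholder stubs for the real gesture- and face-recognition components, and the matching step is simplified to an exact comparison:

    known_faces = {}   # person id -> stored face encoding

    def detect_wave(frame):    # placeholder: real code would track hand motion
        return True

    def extract_face(frame):   # placeholder: e.g. a face crop plus an encoding
        return "encoding-123"

    def match_face(encoding):
        """Return the id of a known face, or None if this person is new."""
        for person_id, stored in known_faces.items():
            if stored == encoding:    # real code: distance in feature space
                return person_id
        return None

    def handle_frame(frame):
        if not detect_wave(frame):
            return "wander"           # keep roaming until someone waves
        encoding = extract_face(frame)
        person = match_face(encoding)
        if person is None:
            person = f"person-{len(known_faces) + 1}"
            known_faces[person] = encoding   # remember this face for next time
        return f"say hello to {person}"

    print(handle_frame(frame=None))   # -> say hello to person-1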

MuZ Controller

In this project we develop a gesture-based sound-producing application using the Microsoft Kinect. The basic idea is to recognise gestures that control loops placed on the screen in the form of circles, plus a soundpad. When a hand moves into a sound circle, the loop plays that sound, and the depth of the hand controls the volume. The soundpad can be used to control the pitch of the sound. Check out this page for an awesome video.
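
The core interaction reduces to hit-testing the tracked hand against the on-screen circles and mapping hand depth to volume. Here is a minimal Python sketch; the circle layout and the near/far depth range are illustrative assumptions:

    import math

    CIRCLES = [  # (x, y, radius, loop name) in screen coordinates
        (200, 300, 80, "drums"),
        (440, 300, 80, "bass"),
    ]

    def depth_to_volume(z, near=0.8, far=2.5):
        """Map hand depth in metres to a 0..1 volume (closer = louder)."""
        return max(0.0, min(1.0, (far - z) / (far - near)))

    def update(hand_x, hand_y, hand_z):
        for cx, cy, r, loop in CIRCLES:
            if math.hypot(hand_x - cx, hand_y - cy) <= r:
                return loop, depth_to_volume(hand_z)
        return None, 0.0   # hand outside every circle: nothing plays

    print(update(210, 320, 1.2))   # -> ('drums', ~0.76)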