I am a researcher at Google DeepMind working on machine learning and robotics. Previously, I was a postdoctoral researcher and PhD student at the Oxford Robotics Institute and a member of Oxford University's New College. Prior visiting scholar positions include UC Berkeley (Prof Pieter Abbeel's group), ETH Zurich (Prof Jonas Buchli's group), and MIT (Prof Karl Iagnemma's group).
My collaborators' and my work has received multiple awards, including Best Student Paper at IROS 2016 and Best Conference Paper at GVSETS 2012. I have been involved in conference, workshop, and seminar organisation and teaching over the last decade, including serving as outreach chair at CoRL 2022, co-organising six iterations of the robot learning workshop at NeurIPS, and giving summer school seminars at UZH 2022 on advances in transfer learning for RL.
My current research focuses on:
Sequential Decision Making: Reinforcement & Imitation Learning: many exciting problems can be modelled as SDM, including robot control and even the optimisation of LLMs/VLMs. I have spent quite a bit of time working on and thinking about abstractions, HRL, IRL, and related ideas (including their application at planetary 🌍 scale).
Large-Scale Transfer and Lifelong Learning: nature does not forget, and neither should our agents. Transfer underlies some of the most exciting agent capabilities as foundation models grow increasingly powerful and it becomes infeasible to regenerate all existing knowledge in individual experiments. We wrote about related ideas in our recent survey.
Robotics and Real-World Control: I'm passionate about algorithms leaving their digital world and affecting our physical one. This includes questions about sim2real for robotics, complex control systems, just plain fun robots, and rethinking our engineering workflows.
Simpler Machine Learning Algorithms: simplicity is key for scientific progress: simpler methods accelerate understanding, lower the barrier to entry for research, and help future researchers build on prior work (here are some related works).