Home

I am currently a research scientist at DeepMind, where I work on applied robotics. Our team's mission is to help bring the power of deep learning for control to the physical world, and in particular to robot manipulation.

I was previously a PhD student at Georgia Tech, co-advised by Mike Stilman and Charles Isbell. My thesis, "Physics-Based Reinforcement Learning for Autonomous Manipulation", was broadly about integrating physics-based models and planning representations from robotics into the Reinforcement Learning framework.

I got into this field because I've always found intelligence fascinating, and my scientific comfort zone requires building things in order to feel like I truly understand them. Fittingly, I would make the same statement for my robots: they understand the physical world only to the extent that they can build things (literally: I work on assembly these days). Methodologically, I am committed to reinforcement learning, but I maintain the belief that higher intelligence will require learning rich, compositional models of the world. I think robots should (and will!) be capable of learning sophisticated and transferable models of the world for themselves, and that this problem lies at the nexus of research in machine learning, cognitive science, and robotics.

Contact

Jonathan Scholz

6 Pancras Square

N1C 4AG

London, UK

jscholz@google.com

Please contact joinus@deepmind.com if you would like to apply to DeepMind, or deepmind-press@google.com for press requests.