About me
I’m a research director and lead of the Controls team at Google DeepMind. From 2002 to 2015, I was a university professor of machine learning and neuroinformatics. My core scientific interest is intelligent machines that are able to autonomously learn new things from scratch. I’m particularly interested in neural networks - a kind of mathematical model of the brain - and their ability to store and generalize information. This fascination goes back to my master’s thesis on supervised learning algorithms (‘Rprop’, 1992).
I have always worked at the boundary between new machine learning methods and their application to novel challenges: neural forecasting systems for financial trading and sales rate prediction (‘George’, 1994; ‘Bild-Zeitung’, 1998 - 2008), self-learning agents that control self-driving cars (at Stanford, 2006), and systems for reading thoughts and even controlling brain activity ('BrainLinks BrainTools', 2011 - 2019). Brainstormers, our robotic soccer team, was a five-time winner of the RoboCup World Championship and one of the first teams to use reinforcement learning (RL) as their core method (1998 - 2008). The data-efficient reinforcement learning algorithms Neural Fitted Q Iteration (NFQ, 2005; NFQCA, 2011) and Deep Fitted Q (DFQ, 2010) laid the groundwork for many methods in current Artificial Intelligence (AI) research.
I have followed my interests in various roles: as a game programmer and author for the ZX81 and ZX Spectrum (1981-1986), a computer science professor at the universities of Dortmund, Osnabrück and Freiburg (2002-2015), and a co-founder of one of the first startups in modern AI (Cognit - Lab for Learning Machines, 2010-2015). In 2015, I joined DeepMind as a research scientist and team lead of the Controls team.
"Future computer programs will contain a growing part of 'intelligent' software modules that are not conventionally programmed but that are learned either from data provided by the user, or from data that the program autonomously collects during its use." (January 2000)