Yann-Michaël De Hauwere graduated magna cum laude in 2006 from the University of Brussels, where he studied Computer Science. He finished his PhD, entitled “Sparse Interactions in Multi-Agent Reinforcement Learning”, summa cum laude at the same university in 2011. His research interests include scaling Reinforcement Learning to settings where multiple agents act together using only limited communication. This includes transferring inherently multi-agent knowledge between the different agents in a system. He is also interested in applying MARL approaches to multiple heterogeneous robots. Currently he is a part-time lecturer at the University of Brussels, where he teaches a course on Declarative Programming. Key publications for this tutorial include [1, 2, 7].
Daniel Hennes graduated summa cum laude from Maastricht University with a B.Sc. in Knowledge Engineering and Computer Science (August 2007) and an M.Sc. in Artificial Intelligence (June 2008). In July 2008 he started his Ph.D. research at Eindhoven University of Technology; as of September 2009, he has continued his research at Maastricht University. From September 2007 to January 2008 and from February 2010 to July 2010, he worked at Oregon State University as a short-term research scholar in the area of Dynamics and Control. From September 2010 to December 2010 he was a Research Intern at Willow Garage, a robotics research lab and technology incubator in Menlo Park, CA. His current research interests are the learning dynamics of autonomous and adaptive multi-agent systems in general, and multi-agent reinforcement learning and its relation to evolutionary game theory in particular. Key publications for this tutorial include [3, 4].
Michael Kaisers graduated from Maastricht University with a B.Sc. in Knowledge Engineering in 2007 on “Reinforcement Learning in Multi-agent Games” and an M.Sc. in Artificial Intelligence in 2008 on “Games and Learning in Auctions”. In both cases he earned the honor summa cum laude, additionally completing the three-year bachelor program in two years and complementing his master program with an extra-curricular four-month research visit to Simon Parsons at Brooklyn College, New York City. In a nationwide competition, the Netherlands Organization for Scientific Research (NWO) awarded him a TopTalent 2008 PhD grant for his proposal “Multi-agent Learning in Auctions”. The findings of his PhD research have extended and solidified the link between evolutionary game theory and reinforcement learning [6], in particular considering variations of Q-learning [5]. He intensified his international research network through a three-month research visit to Michael Littman at Rutgers, the State University of New Jersey, and has given presentations at various workshops and conferences.
Ann Nowé graduated from the University of Ghent in 1987, where she studied mathematics with optional courses in computer science. She then became a research assistant at the University of Brussels, where she finished her PhD in 1994 in collaboration with Queen Mary and Westfield College, University of London. Currently she is a professor at the Vrije Universiteit Brussel, both in the Computer Science Department of the Faculty of Sciences and in the Computer Science group of the Engineering Faculty. Her research interests include Multi-Agent Learning (MAL) and Reinforcement Learning (RL). Within MAL, she focuses on the coordination of agents with limited communication, social agents learning fair policies, and the relationship between Learning Automata and Evolutionary Game Theory. Within RL, she primarily looks at conditions for convergence to optimality, the relationship with Dynamic Programming, and the application to non-stationary problems and distributed multi-agent systems. Key publications for this tutorial include [7, 8].
Karl Tuyls works as an Associate Professor in Artificial Intelligence at the Department of Knowledge Engineering, Maastricht University (The Netherlands), where he leads a research group on swarm robotics and learning in multi-agent systems (Maastricht Swarmlab). Previously, he held positions at the Vrije Universiteit Brussel (Belgium), Hasselt University (Belgium) and Eindhoven University of Technology (The Netherlands). His main research interests lie at the intersection of Reinforcement Learning, Multi-Agent and Robot Systems, and (Evolutionary) Game Theory. He was a (co-)organizer of several events on this topic, such as the European Workshop on Multi-Agent Systems (EUMAS’05), the Belgian-Dutch Conference on AI (BNAIC’05 and ’09), and workshops on adaptive and learning agents (EGTMAS’03, LAMAS’05, ALAMAS’07, ALAg & ALAMAS’08, ALA’09). In 2000 he was awarded the Information Technology prize in Belgium, and in 2007 he was elected best junior researcher (TOPDOG) of the Faculty of Humanities and Sciences, Maastricht University, The Netherlands. Tuyls is an associate editor of two journals and has published in leading journals in his research area, such as Artificial Intelligence, Journal of Artificial Intelligence Research, Theoretical Biology, Autonomous Agents and Multi-Agent Systems, and Journal of Machine Learning Research. Key publications for this tutorial include [8, 9].
[1] De Hauwere, Y.-M., Vrancx, P., and Nowé, A. Learning multi-agent state space representations. In the 9th International Conference on Autonomous Agents and Multiagent Systems (Toronto, Canada, 2010), pp. 715–722.
[2] De Hauwere, Y.-M., Vrancx, P., and Nowé, A. Solving delayed coordination problems in MAS (extended abstract). In the 10th International Conference on Autonomous Agents and Multiagent Systems (Taipei, Taiwan, 2011), pp. 1115–1116.
[3] Hennes, D., Tuyls, K., and Rauterberg, M. Formalizing multi-state learning dynamics. In IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 2008. WI-IAT’08 (2008), vol. 2, pp. 266–272.
[4] Hennes, D., Tuyls, K., and Rauterberg, M. State-coupled replicator dynamics. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems (2009), International Foundation for Autonomous Agents and Multiagent Systems, pp. 789–796.
[5] Kaisers, M., and Tuyls, K. Frequency adjusted multi-agent Q-learning. In Proc. of 9th Intl. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010) (May 10–14, 2010), van der Hoek, Kamina, Lespérance, Luck, and Sen, Eds., pp. 309–315.
[6] Kaisers, M., and Tuyls, K. Replicator dynamics for multi-agent learning: An orthogonal approach. In Adaptive and Learning Agents (ALA) 2009 (2010), M. E. Taylor and K. Tuyls, Eds., LNCS 5924, Springer, pp. 49–59.
[7] Nowé, A., Vrancx, P., and De Hauwere, Y.-M. Game theory and multi-agent reinforcement learning. In Reinforcement Learning: State-of-the-Art (2011), M. Wiering and M. van Otterlo, Eds., Springer, to appear.
[8] Tuyls, K., and Nowé, A. Evolutionary game theory and multi-agent reinforcement learning. Knowl. Eng. Rev. 20, 1 (2005), 63–90.
[9] Tuyls, K., and Parsons, S. What evolutionary game theory tells us about multiagent learning. Artificial Intelligence 171, 7 (2007), 406–416.