Learning in Games
This line of research is about "tools for designing games".
All software is freely available for non-profit research and teaching (tested on Windows, with parts also tested on Unix, and variants for other platforms too - Android & iOS). An overview (in Greek) of the technologies used over the years in a variety of undergraduate and postgraduate dissertations is available here: http://snf-858823.vm.okeanos.grnet.gr/rlgame/.
It started out exploring the use of reinforcement learning and neural networks to evolve computer players that can effectively play a (new) board game against humans, inspired mainly by IBM's Deep* machines and Tesauro's Neurogammon and TD-Gammon. We then moved on to investigate how best to utilize expert involvement, in terms of humans playing against the computer, so that the algorithms can efficiently and effectively develop defensive and offensive strategies. We have also experimented with using minimax as a tutor.
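As a rough illustration of the TD-Gammon-style setup, the sketch below trains a small neural value function with TD(0) updates. The board encoding, network size, and learning rate here are assumptions made for the example; this is not the actual RLGame implementation.

```python
import numpy as np

class TDValueNet:
    """Tiny one-hidden-layer network estimating the value of a board state,
    updated with TD(0) - a minimal sketch in the spirit of TD-Gammon."""

    def __init__(self, n_inputs, n_hidden=8, alpha=0.1, gamma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.alpha, self.gamma = alpha, gamma

    def value(self, x):
        """Predicted value of state feature vector x."""
        return float(self.W2 @ np.tanh(self.W1 @ x))

    def td_update(self, x, x_next, reward, terminal):
        """One TD(0) step: move V(x) toward reward + gamma * V(x_next)."""
        target = reward if terminal else reward + self.gamma * self.value(x_next)
        h = np.tanh(self.W1 @ x)
        delta = target - self.W2 @ h                     # TD error
        grad_W1 = np.outer(self.W2 * (1.0 - h * h), x)   # backprop through tanh
        self.W2 = self.W2 + self.alpha * delta * h
        self.W1 = self.W1 + self.alpha * delta * grad_W1
        return delta
```

In self-play, each move would generate a (state, next state) pair fed to `td_update`, with a nonzero reward only at the end of a game.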
Initial work on the subject, a new board game for testing, and the development of the underlying technology and playing mechanisms (including self-play and human-vs-computer) are described in two early papers:
D. Kalles and E. Ntoutsi. “Interactive Verification of Game Design and Playing Strategies”, IEEE International Conference on Tools with Artificial Intelligence, 2002.
D. Kalles and P. Kanellopoulos. “On Verifying Game Designs and Playing Strategies using Reinforcement Learning”, ACM Symposium on Applied Computing, special track on Artificial Intelligence and Computation Logic, 2001.
Since a human can be considered an expert player, we then experimented with how to use such expert knowledge in a cost-effective manner, also trying to quantify what it means to play well (winning many games in few moves). Papers on this issue are:
D. Kalles. “Player Co-modeling in a Strategy Board Game: Discovering how to Play Fast”, Cybernetics and Systems, 2008.
D. Kalles, and Ch. Kalantzis. “Evolving Computer Game Playing via Human-Computer Interaction: Machine Learning Tools in the Knowledge Engineering Life-Cycle”, Joint Conference on Knowledge-Based Software Engineering, 2008.
D. Kalles. “Measuring Expert Impact on Learning how to Play a Board Game”, IFIP Conference on Artificial Intelligence Applications and Innovations, 2007.
D. Kalles, and I. Fykouras. “Time does not always Buy Quality in Co-evolutionary Learning”, Panhellenic Conference on Artificial Intelligence, 2010.
D. Kalles, and I. Fykouras. “Examples as Interaction: On Humans Teaching a Computer to Play a Game”, International Journal on Artificial Intelligence Tools, 2010.
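To make the "winning many games in few moves" criterion concrete, one could score each game by both its outcome and its length. The metric below is a hypothetical illustration, not the co-modeling measure used in the papers above.

```python
def fast_win_score(results, max_moves=100):
    """Combine win rate and game length into one score in [0, 1].

    `results` is a list of (won, n_moves) pairs, one per game; `max_moves`
    is the longest game considered. Wins count for more when achieved in
    fewer moves; losses count as zero. (Hypothetical metric, for
    illustration only.)
    """
    if not results:
        return 0.0
    per_game = [(max_moves - n_moves) / max_moves if won else 0.0
                for won, n_moves in results]
    return sum(per_game) / len(results)
```

For example, one quick win (10 moves) and one loss over two games would score 0.45, while two slow wins would score less than two quick ones.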
We have also experimented with using minimax as an expert player. Interestingly, it is minimax's opponent that seems to make the best use of the experience; we use the term "pendulum effect" for this observation.
D. Kalles and P. Kanellopoulos. “A Pendulum Effect of Expert Playing in Games”, IEEE International Conference on Tools with Artificial Intelligence - ICTAI2014, Limassol, Cyprus, November 2014.
D. Kalles and P. Kanellopoulos. “A Pendulum Effect in Co-evolutionary Learning in Games”, European Workshop in Reinforcement Learning, 2011.
D. Kalles and P. Kanellopoulos. “A Minimax Tutor for Learning to Play a Board Game”, Workshop on Artificial Intelligence in Games, a workshop of the 18th European Conference on Artificial Intelligence, 2008.
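A minimax tutor in the above sense can be sketched generically. The toy Nim variant below (take 1 or 2 objects; whoever takes the last one wins) stands in for the actual board game, and the game interface is an assumption made for illustration.

```python
class Nim:
    """Toy game: take 1 or 2 objects from a pile; taking the last one wins."""
    def moves(self, pile):
        return [m for m in (1, 2) if m <= pile]
    def apply(self, pile, move):
        return pile - move

def negamax(game, state):
    """Exhaustive minimax in negamax form: returns (value, best_move) from
    the viewpoint of the player to move (+1 = win, -1 = loss)."""
    moves = game.moves(state)
    if not moves:
        # No moves left: the previous player took the last object and won.
        return -1, None
    best_value, best_move = -2, None
    for move in moves:
        value = -negamax(game, game.apply(state, move))[0]
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move
```

A tutor built along these lines selects (or annotates) moves with `negamax` while the learning agent trains against it; for a real board game one would of course bound the search depth and add a heuristic evaluation at the cutoff.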
Questions we are looking into:
Is it more important to leave playing policy evolution to good algorithms or is it better to develop an experimentation environment for streamlining expert involvement?
Can an agent self-plan its learning curriculum, i.e. decide on which opponents to play, in order to maximise its learning?
We believe that environments supporting the streamlining of social interactions between diverse player populations, from novices to experts, will be key to exploring how one can set one's own learning path, so we have invested some effort along that direction too.
Some related papers are:
K. Giagtzoglou and D. Kalles. “A gaming ecosystem as a tool for research and education in Artificial Intelligence”, Panhellenic Conference on Artificial Intelligence, Rio, Greece, 2018.
Ch. Kiourt, D. Kalles and G. Pavlidis. “Rating the Skill of Synthetic Agents in Competitive Multi-Agent Environments”, Knowledge and Information Systems, 2018.
Ch. Kiourt, D. Kalles and P. Kanellopoulos. “How game complexity affects the playing behavior of synthetic agents”, 5th European Conference on Multi-Agent Systems, Évry, France, December, 2017.
Ch. Kiourt and D. Kalles. “Synthetic Learning Agents in Game Playing Social Environments”, Adaptive Behavior, Vol. 24, No. 6, pp. 411 – 427, 2016.
Ch. Kiourt and D. Kalles. “A Platform for Large-Scale Game-Playing Multi-Agent Systems on a High Performance Computing Infrastructure”, Multiagent and Grid Systems, Vol. 12, pp. 35 – 54, 2016.
Ch. Kiourt and D. Kalles. “Using Opponent Models to Train Inexperienced Synthetic Agents in Social Environments”, IEEE Conference on Computational Intelligence and Games Conference on Multi-Agent Systems, Santorini, Greece, September, 2016.
Ch. Kiourt, G. Pavlidis and D. Kalles. “ReSkill: Relative Skill-Level Calculation System”, Panhellenic Conference on Artificial Intelligence, Thessaloniki, Greece, 2016.
Ch. Kiourt and D. Kalles. “Learning in Multi Agent Social Environments with Opponent Models”, European Conference on Multi-Agent Systems, Athens, Greece, December, 2015.
Ch. Kiourt, D. Kalles and G. Pavlidis. “Human Rating Methods on Multi-Agent Systems”, European Conference on Multi-Agent Systems, Athens, Greece, December, 2015.
Ch. Kiourt and D. Kalles. “A Distributed Multi Agents Based Platform for High Performance Computing Infrastructures”, Workshop Parallel and Distributed Computing for Knowledge Discovery in Data Bases, a workshop of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery, Porto, Portugal, September, 2015.
Ch. Kiourt and D. Kalles. “Development of Grid-based Multi Agent Systems for Social Learning”, IEEE International Conference on Information, Intelligence, Systems and Applications, Corfu, Greece, July 2015.
N. Dikaros and D. Kalles. “Developing a Game Server for Humans and Bots”, Panhellenic Conference on Artificial Intelligence, Ioannina, Greece, 2014.
A. Georgas, D. Kalles and V. Tatsis. “Scientific Workflows for Game Analytics”, Encyclopedia of Business Analytics and Optimization, J. Wang (ed), IGI Global, pp. 2115-2125, 2014.
Ch. Kiourt and D. Kalles. “Building a Social Multi-Agent System Simulation Management Toolbox”, Balkan Conference on Informatics, Thessaloniki, Greece, September 2013.
Ch. Kiourt and D. Kalles. “Social Reinforcement Learning in Game Playing”, IEEE International Conference on Tools with Artificial Intelligence, Athens, Greece, November 2012.
Interesting tangents have also evolved along the way, on machine learning as a component of designing new games and on genetically evolving neural networks for Othello:
A. Nikolakakis and D. Kalles. “Neural Networks as a Learning Component for Designing Board Games”, 18th International Conference on Engineering Applications of Neural Networks (EANN 2017), Athens, Greece, August 2017.
V. Makris and D. Kalles. “Evolving Multi-Layer Neural Networks for Othello”, Panhellenic Conference on Artificial Intelligence, Thessaloniki, Greece, 2016.
Interested? Contact me. (I have not updated this page since 2018.)