ReSkill: Relative Skill Calculation System
A free tool for rating game agents' relative skill level in competitive multi-agent environments.
The application launches in pop-up windows, so you must enable pop-ups in your browser.
Due to the NPAPI plugin issue, the application works only in the Mozilla Firefox browser.
We are happy to consider requests for a standalone executable.
Please also update your Java Security Exceptions, as shown in the next figure, and accept the Java security notifications. These configurations are a prerequisite for Java 7 and later, according to the guidelines (Oracle-Java link).
To contribute towards the evaluation of human performance rating systems in multi-agent systems, we have implemented the most well-known rating systems in an integrated web-based system that is open to the scientific community for extensive testing and evaluation of agents' performance. The entire system is built in Java on the input–process–output (IPO) model. Its main operation follows the workflow system model. As shown in Figure 1, the first step of the process reads and analyzes the input file. The next step processes the players'/agents' data to produce performance ratings. Finally, the system generates an output file for additional data analysis, along with various performance curves.
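The three IPO stages described above can be sketched as a minimal pipeline. This is an illustrative sketch only: the class, record, and method names are ours, not the actual ReSkill API, and win counting stands in for a real rating update.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of ReSkill's input-process-output workflow.
public class RatingPipeline {

    // Hypothetical shape of one match record from the input file.
    record Match(String playerA, String playerB, String winner) {}

    // Input stage: in ReSkill this parses a CSV file; here we take pre-split rows.
    static List<Match> readInput(List<String[]> rows) {
        List<Match> matches = new ArrayList<>();
        for (String[] r : rows) {
            matches.add(new Match(r[0], r[1], r[2]));
        }
        return matches;
    }

    // Process stage: count wins per agent as a stand-in for a rating method.
    static Map<String, Integer> process(List<Match> matches) {
        Map<String, Integer> wins = new HashMap<>();
        for (Match m : matches) {
            wins.merge(m.winner(), 1, Integer::sum);
        }
        return wins;
    }

    // Output stage: in ReSkill this writes a CSV file; here we format lines.
    static List<String> writeOutput(Map<String, Integer> wins) {
        List<String> lines = new ArrayList<>();
        wins.forEach((agent, w) -> lines.add(agent + "," + w));
        return lines;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
            new String[]{"Agt_11", "Agt_21", "Agt_21"},
            new String[]{"Agt_11", "Agt_31", "Agt_11"});
        System.out.println(writeOutput(process(readInput(rows))));
    }
}
```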
The input files contain the players/agents, their initial ratings (which may be set to default values) and the results of all the matches between them in any competitive matching scenario, as shown in Figure 2 (a) for two different players/agents. Some generic additional information can also optionally be included in the input files. It should be highlighted that the system supports two different input file structures:
A web-based Graphical User Interface (GUI) was developed to provide an integrated environment for further testing and experimentation with players'/agents' ratings. Figure 3 shows the web-based GUI of the application. For convenience, the GUI is divided into two panels: the first (upper) is the panel of files and data, and the second (lower) is the panel of results.
In the first panel, the user chooses either to upload the input data or to generate the data manually through the GUI. The second option is shown in Figure 3 (a) and is limited to the creation of LIDS-type files. After uploading or generating the data, the “Rating” option computes the players'/agents' ratings. Finally, the “Export” option stores the results in a CSV text file; if the input data were created manually, an input file is also created and stored.
The second panel of the system provides several tab panels with various functionalities:
The output file has a similar structure, with small changes and some additional columns. Specifically, it consists of nine (9) columns, two of which are new, as shown in the next figure. The first column shows the agent under study, whereas the last column is an incremental number that counts the matches. All the other columns are updated versions of the corresponding columns from the input files. Both input and output files are CSV-structured for maximal interoperability and interconnectivity with external data analysis tools, along with acceptable human readability and editability, which other data interchange formats, such as JSON, lack.
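Since only the first column (the agent under study) and the last column (the match counter) of the nine-column output file are named above, a consumer of the file can already extract those two without knowing the middle columns. A minimal sketch, assuming a plain comma-separated row with no quoted fields:

```java
// Sketch: extracting the named columns from one row of the nine-column
// output CSV. The middle column values here are placeholders.
public class OutputRow {

    // First column: the agent under study.
    static String agent(String csvLine) {
        return csvLine.split(",")[0];
    }

    // Last column: the incremental match counter.
    static int matchNumber(String csvLine) {
        String[] cols = csvLine.split(",");
        return Integer.parseInt(cols[cols.length - 1]);
    }

    public static void main(String[] args) {
        String row = "Agt_11,c2,c3,c4,c5,c6,c7,c8,7";  // hypothetical row
        System.out.println(agent(row) + " match " + matchNumber(row));
    }
}
```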
We also provide a large experimental dataset (tab-delimited text files) that was used for the evaluation of human rating methods (Elo and Glicko) applied to multi-agent systems, as well as for the evaluation of a newly introduced rating method (RSLE). This dataset was used in the experiments of the paper "Kiourt, C., Kalles, D. and Pavlidis, G.: “Rating Synthetic Agents' Skill in Competitive Multi Agent Environments”, International Journal of Knowledge and Information Systems, 2017 (under review)".
The next figure presents an example of the provided dataset. There are 3 pairs of columns; each pair presents the opponents and the results of the games played by an agent, whose name always appears in the first row of the second column of the pair. For example, the opponents of agent Agt_11 (first row, column A2) are listed in column A1, while the winner of each game is listed in column A2 (except the first row). The winner of the second game (Game 2 row) of agent Agt_11 (Agt_11 vs Agt_21) was the opponent (Agt_21). All our datasets are structured this way.
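The pair-column layout just described can be parsed as sketched below. The class and record names are illustrative; each row is assumed to be one tab-split line of the file, with the opponent column first and the agent/winner column second in every pair.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of reading one column pair of the tab-delimited dataset:
// column 2k lists the opponents, column 2k+1 holds the agent's name in the
// first row and the winner of each game in the rows below it.
public class DatasetPair {

    record Game(String opponent, String winner) {}

    // The agent's name sits in the first row of the pair's second column.
    static String agentName(List<String[]> rows, int pairIndex) {
        return rows.get(0)[2 * pairIndex + 1];
    }

    // Every subsequent row is one game: opponent and winner.
    static List<Game> games(List<String[]> rows, int pairIndex) {
        List<Game> games = new ArrayList<>();
        for (int i = 1; i < rows.size(); i++) {  // skip the header row
            String[] r = rows.get(i);
            games.add(new Game(r[2 * pairIndex], r[2 * pairIndex + 1]));
        }
        return games;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
            new String[]{"", "Agt_11"},         // header row: agent name
            new String[]{"Agt_21", "Agt_11"},   // game 1: Agt_11 won
            new String[]{"Agt_21", "Agt_21"});  // game 2: the opponent won
        System.out.println(agentName(rows, 0));              // Agt_11
        System.out.println(games(rows, 0).get(1).winner());  // Agt_21
    }
}
```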
If 10 agents participated in a Round Robin tournament with 10 games per match, the dataset file of the experiment would have 20 columns (one column pair per agent) and 91 rows (each agent plays 9 opponents × 10 games = 90 games, plus the header row).
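The file dimensions follow directly from the layout, and can be computed for any round-robin configuration (method names are illustrative):

```java
// Sketch: dataset file dimensions for a round-robin tournament
// among n agents with g games per match.
public class DatasetSize {

    // One column pair (opponents, winners) per agent.
    static int columns(int n) {
        return 2 * n;
    }

    // Each agent plays (n - 1) opponents, g games each, plus one header row.
    static int rows(int n, int g) {
        return (n - 1) * g + 1;
    }

    public static void main(String[] args) {
        System.out.println(columns(10) + " x " + rows(10, 10)); // prints 20 x 91
    }
}
```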
We mainly work on social dynamics in multi-agent environments, where synthetic agents are an effective way to simulate real-life conditions. Nowadays there is a trend towards the integration of social dynamics in multi-agent virtual environments to better assess the performance of synthetic agents in competitive situations. This work is mainly carried out using human rating methods such as Elo and Glicko, two of the most widespread methods, primarily used for chess.
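To make the mechanics of such rating methods concrete, here is the standard Elo update rule: the expected score of a player is E = 1 / (1 + 10^((R_b − R_a)/400)), and the new rating is R_a' = R_a + K (S − E), where S is the actual score (1 for a win, 0.5 for a draw, 0 for a loss). This is the textbook formulation, not ReSkill's internal code, and K = 32 is a common but illustrative choice.

```java
// Standard Elo rating update (textbook formulation, not ReSkill internals).
public class Elo {
    static final double K = 32.0;  // illustrative K-factor

    // Expected score of the player rated ra against an opponent rated rb.
    static double expected(double ra, double rb) {
        return 1.0 / (1.0 + Math.pow(10.0, (rb - ra) / 400.0));
    }

    // New rating after a game with actual score S (1 win, 0.5 draw, 0 loss).
    static double update(double ra, double rb, double score) {
        return ra + K * (score - expected(ra, rb));
    }

    public static void main(String[] args) {
        // Two equally rated agents have expected score 0.5, so a win
        // moves the winner up by K/2 = 16 points.
        System.out.println(update(1500, 1500, 1.0)); // prints 1516.0
    }
}
```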
ReSkill is a web-based system developed so that anyone can apply these well-known human rating systems in various multi-agent rating experiments.
Chairi Kiourt is a PhD candidate at the Hellenic Open University, School of Science and Technology. He received a Bachelor's degree in electrical engineering from the Technical Education Institute of Kavala, Greece. He also received an M.Sc. in Systems Engineering and Management from the University of Thrace, Greece. His research is on large-scale agent socialization experiments, using game playing and machine learning.
Dimitris Kalles is an Assistant Professor on Artificial Intelligence with the Hellenic Open University, School of Science and Technology. He holds a PhD in Computation from UMIST (Manchester, UK) and is actively researching in artificial intelligence, software engineering and e-learning. He has taught several courses and supervised numerous dissertations and theses, both undergraduate and postgraduate, at the Hellenic Open University and at other universities. He has held positions in research and in industry and has served as expert evaluator for the European Commission, for the Corallia Clusters Initiative and for several other organizations. He has served as Chairperson and Secretary General of the Hellenic Society for Artificial Intelligence.
George Pavlidis received his PhD in Electrical Engineering, earning the distinction of the Ericsson Awards of Excellence in Telecommunications. He has been working for numerous R&D projects with emphasis on multimedia systems in culture and education. In 2002 he joined the ‘Athena’ Research Center, where he is now a research director, head of the Multimedia Research Group and head of research at ‘Clepsydra’ Cultural Heritage Digitization Center. His research interests involve 2D/3D imaging, CBIR, multimedia technologies, human-computer interaction, intelligent user interfaces, multi-sensory environments and ambient intelligence, 3D digitization and reconstruction, 3D-GIS and mixed/augmented/virtual reality. Dr. Pavlidis is a member of the Technical Chamber of Greece, of the Hellenic Researchers' Association, a senior member of the IEEE, and a founding member of the ‘Athena’ Research Center’s Researchers’ Association.