microRTS is a small implementation of an RTS game. It was created to experiment with game-tree search techniques for real-time games, and thus I made it deterministic and fully observable. microRTS is coded in Java and comes with a collection of AI techniques already built in (including real-time variants of alpha-beta search, Monte Carlo search, UCT, etc.).
  • The complete source code can be found here: microRTS
  • You can see a video of a human playing against a Monte Carlo Tree Search AI here: youtube video 
  • You might also be interested in the microRTS competition
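To give an idea of the kind of search technique involved, here is a minimal sketch of plain Monte Carlo search over a deterministic, fully observable game, in the spirit of the built-in AIs. All interfaces and class names below are made up for illustration; they are not microRTS's actual API:

```java
import java.util.List;
import java.util.Random;

// Hypothetical minimal game interface (NOT microRTS's actual classes).
interface State {
    List<Integer> legalActions();   // actions encoded as ints
    State apply(int action);        // deterministic successor
    boolean isTerminal();
    double evaluate();              // score for the searching player
}

// Plain Monte Carlo search: sample random playouts from each root action
// and pick the action with the best average outcome.
class MonteCarloSearch {
    private final Random rng = new Random(0);

    int search(State root, int playoutsPerAction, int maxDepth) {
        int best = -1;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (int a : root.legalActions()) {
            double total = 0;
            for (int i = 0; i < playoutsPerAction; i++)
                total += playout(root.apply(a), maxDepth);
            double avg = total / playoutsPerAction;
            if (avg > bestValue) { bestValue = avg; best = a; }
        }
        return best;
    }

    private double playout(State s, int depth) {
        while (!s.isTerminal() && depth-- > 0) {
            List<Integer> acts = s.legalActions();
            s = s.apply(acts.get(rng.nextInt(acts.size())));
        }
        return s.evaluate();
    }
}

// Toy deterministic game for illustration: pick numbers, aiming for a sum of 10.
class ToyState implements State {
    final int sum, moves;
    ToyState(int sum, int moves) { this.sum = sum; this.moves = moves; }
    public List<Integer> legalActions() { return List.of(1, 2, 3); }
    public State apply(int a) { return new ToyState(sum + a, moves + 1); }
    public boolean isTerminal() { return moves >= 5; }
    public double evaluate() { return -Math.abs(10 - sum); } // closer to 10 is better
}
```

Real-time variants (like those in microRTS) additionally have to deal with durative and simultaneous actions, and with a fixed computation budget per game frame.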


ρG is a Java library for manipulating and reasoning about Directed Labeled Graphs (DLGs). Specifically, the focus of the library is on refinement operators: functions that, given a DLG, generate other DLGs that are either more general or more specific. These refinement operators can be used to search over the space of graphs, for example to define machine learning algorithms, or similarity/distance measures between graphs.
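To illustrate the idea of refinement operators, here is a deliberately simplified sketch in which a DLG is just a set of labeled edges: removing an edge generalizes the graph, and adding an edge (from some fixed vocabulary) specializes it. ρG's actual classes and operators are richer than this (they have to handle subsumption between graphs properly), so treat this only as an illustration of the generalization/specialization idea:

```java
import java.util.*;

// Hypothetical DLG represented as a set of labeled edges (not ρG's actual classes).
record Edge(String from, String label, String to) {}

class RefinementOps {
    // Upward (generalization) refinement: dropping any one edge yields a
    // graph that is more general than the original.
    static List<Set<Edge>> generalize(Set<Edge> g) {
        List<Set<Edge>> out = new ArrayList<>();
        for (Edge e : g) {
            Set<Edge> smaller = new HashSet<>(g);
            smaller.remove(e);
            out.add(smaller);
        }
        return out;
    }

    // Downward (specialization) refinement: adding one new edge from a fixed
    // vocabulary yields graphs that are more specific than the original.
    static List<Set<Edge>> specialize(Set<Edge> g, List<Edge> vocabulary) {
        List<Set<Edge>> out = new ArrayList<>();
        for (Edge e : vocabulary) {
            if (!g.contains(e)) {
                Set<Edge> larger = new HashSet<>(g);
                larger.add(e);
                out.add(larger);
            }
        }
        return out;
    }
}
```

Iterating operators like these defines a search space over graphs, which is what makes them usable inside machine learning algorithms or distance measures.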

Riu / SAM

Riu is an interactive narrative system which focuses on narrating the inner world of the main character and its relation with the external story world. Riu is a stand-alone engine that can interpret different stories. For example, in the "Ales" story world, Ales is a robot who has initially lost all of his memories. As the story progresses, Ales might recall some of those memories if he finds himself in a situation similar to one of his past memories.

In order to achieve the desired narrative effect, Riu uses computational analogy to implement the memory recall of Ales, as well as his imagination. When Ales is faced with a decision point (where the user can provide input), he will imagine the outcome of each of the options, and refuse to execute some of them if he does not like the outcome he imagines. This paper describes Riu in detail.

SAM is an analogy-based story generation algorithm, used by Riu, capable of completing a target story T by analogy from a source story S. Internally, SAM uses the Structure-Mapping Engine (SME) algorithm (by Falkenhainer, Forbus and Gentner) to find mappings between S and T.
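As a rough illustration of the kind of mapping SME computes (much simplified: real SME matches nested relational structure and favors systematic mappings), the toy sketch below brute-forces the injective entity mapping between two small sets of relations that preserves the most relations. All names here are made up for the example:

```java
import java.util.*;

// Toy structure mapping, loosely in the spirit of SME: entities are strings,
// stories are sets of labeled relations, and we search for the injective
// entity mapping from source to target that preserves the most relations.
class ToyMapper {
    record Rel(String a, String label, String b) {}

    static Map<String, String> bestMapping(Set<Rel> source, Set<Rel> target) {
        List<String> srcEnts = entities(source);
        List<String> tgtEnts = entities(target);
        Map<String, String> best = new HashMap<>();
        int[] bestScore = {-1};
        search(0, srcEnts, tgtEnts, new HashMap<>(), source, target, best, bestScore);
        return best;
    }

    private static void search(int i, List<String> src, List<String> tgt,
                               Map<String, String> m, Set<Rel> s, Set<Rel> t,
                               Map<String, String> best, int[] bestScore) {
        if (i == src.size()) {  // complete mapping: count preserved relations
            int score = 0;
            for (Rel r : s)
                if (t.contains(new Rel(m.get(r.a()), r.label(), m.get(r.b())))) score++;
            if (score > bestScore[0]) { bestScore[0] = score; best.clear(); best.putAll(m); }
            return;
        }
        for (String cand : tgt) {
            if (m.containsValue(cand)) continue;  // keep the mapping injective
            m.put(src.get(i), cand);
            search(i + 1, src, tgt, m, s, t, best, bestScore);
            m.remove(src.get(i));
        }
    }

    private static List<String> entities(Set<Rel> rels) {
        Set<String> es = new LinkedHashSet<>();
        for (Rel r : rels) { es.add(r.a()); es.add(r.b()); }
        return new ArrayList<>(es);
    }
}
```

Once such a mapping is found, an analogy-based generator like SAM can transfer structure from the source story onto the unmapped parts of the target, which is how story completion works at a high level.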

The source code of Riu can be found here Riu
The source code of a stand-alone version of SAM can be found here SAM

Note: SAM comes with three sample stories taken from the Ales story world; for more stories, you can also download Riu from this page.

We are currently working on a new story world, called Evening Tide, which will be available as soon as it's completed.


FTL is a library to manipulate feature terms. Feature terms are a knowledge representation formalism, a subset of first-order logic different from description logics, and are very useful to represent relational data, especially for machine learning and case-based reasoning research. FTL provides the following functionality:
  • Represent terms (create them, and load and save them from disk)
  • Subsumption, unification and antiunification
  • Refinement operators
  • Graphical visualization of feature terms
  • It includes a small collection of machine learning techniques implemented using feature terms (algorithms such as ID3, FOIL, HYDRA, LID, and INDIE, as well as similarity measures like RIBL).
FTL was inspired by the NOOS language by Josep Lluis Arcos and Enric Plaza.
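To give a flavor of what subsumption over feature terms looks like, here is a deliberately simplified sketch: a term is a sort plus a feature map, and subsumption checks the sort hierarchy and recurses over features. FTL's actual implementation also handles variable equality and set-valued features, which this sketch ignores, and none of these class names are FTL's own:

```java
import java.util.*;

// A stripped-down feature term: a sort plus a feature -> subterm map.
// (Real feature terms also support variables and set-valued features.)
class FTerm {
    final String sort;
    final Map<String, FTerm> features = new HashMap<>();
    FTerm(String sort) { this.sort = sort; }
    FTerm set(String f, FTerm v) { features.put(f, v); return this; }
}

class Subsumption {
    private final Map<String, String> parent;  // sort hierarchy: child -> parent
    Subsumption(Map<String, String> sortParents) { this.parent = sortParents; }

    // True if sort g is equal to s or an ancestor of s in the hierarchy.
    boolean moreGeneralSort(String g, String s) {
        for (String x = s; x != null; x = parent.get(x))
            if (x.equals(g)) return true;
        return false;
    }

    // t1 subsumes t2 iff t1's sort is equal to or above t2's sort, and every
    // feature defined in t1 recursively subsumes the same feature in t2.
    boolean subsumes(FTerm t1, FTerm t2) {
        if (!moreGeneralSort(t1.sort, t2.sort)) return false;
        for (Map.Entry<String, FTerm> e : t1.features.entrySet()) {
            FTerm v2 = t2.features.get(e.getKey());
            if (v2 == null || !subsumes(e.getValue(), v2)) return false;
        }
        return true;
    }
}
```

Unification and antiunification then compute, respectively, the most general term subsumed by two terms and the most specific term that subsumes both, which is what the learning algorithms in the library are built on.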

The source code can be found here:

Any code on this page that is from 2013 or older will require an older version of FTL (which can be found here).

This library was developed in collaboration with Enric Plaza. I'd also like to thank Carlos López del Toro, who took charge of refactoring the source code, and of creating a GUI that demos most of the functionality of FTL (included in the source code).

Darmok 2

Darmok 2 is a case-based planning system designed to operate in complex, real-time domains for which perfect theories cannot be constructed. It has been specifically designed to work in real-time strategy (RTS) games. Darmok 2 is an evolution of the original Darmok system (which was designed to play Wargus). Darmok 2 integrates the following key ideas:
  • Interleaved planning and execution
  • Learning plans and cases from demonstration
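The interleaving of planning and execution can be sketched as a monitor-and-replan loop: execute the current plan step by step, and whenever the next step is no longer applicable because the world has changed, plan again instead of blindly running the stale plan to completion. The skeleton below is purely illustrative; none of these names come from Darmok 2 itself:

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical skeleton of interleaved planning and execution
// (illustrative only; not Darmok 2's actual architecture).
class PlanExecuteLoop {
    interface Action {
        boolean applicable(Map<String, Object> state);
        void execute(Map<String, Object> state);
    }

    // Returns the number of actions actually executed.
    static int run(Deque<Action> plan, Map<String, Object> state,
                   Function<Map<String, Object>, Deque<Action>> replan,
                   int maxSteps) {
        int executed = 0, replans = 0;
        while (!plan.isEmpty() && executed < maxSteps) {
            Action next = plan.peek();
            if (!next.applicable(state)) {
                if (replans++ >= maxSteps) break;  // give up if replanning loops
                plan = replan.apply(state);        // interleave: plan again mid-execution
                continue;
            }
            plan.poll().execute(state);
            executed++;
        }
        return executed;
    }
}
```

In an RTS setting the "state" would be the observed game state, and the replanning function would be where case retrieval and adaptation happen.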

Darmok 2 powers the social gaming website MakeMEPlayME, where players can train their own MEs (artificial intelligence bots) and then make them compete in a variety of games. Players create their MEs simply by demonstration (i.e., a player plays a game demonstrating a particular strategy, and the ME learns it automatically).

The source code can be found here:
And the games we have been using to test Darmok 2 can be found here:


NOVA is a software agent capable of playing StarCraft. Specifically, it plays the Terran race, and it is being built by my student Alberto Uriarte as part of his PhD thesis for the StarCraft AIIDE competition.

Experiments Source Code and Data

Here you can find the code and data required to reproduce the results published in some of my papers (I should do this for all the papers, but I am slow to update this page; if you want the code to replicate any experiment, just let me know). It must be noted that the code included here is research code, and not designed to be readable. If you have trouble understanding the code, please contact me.

Concerning the datasets, I mostly use datasets from the UCI ML repository, but since some of my research uses feature terms as the representation formalism, they are translated into this formalism (I do this automatically, so there should be no information loss or gain). Just in case, and in order to be clear about what the datasets look like when I use them, I also include links to the specific versions I used. My complete list of datasets can be found here.

ICCBR 2015: Argument-based Case Revision in CBR for Story Generation
Applications of Constraint Satisfaction to Feature Terms
  • Source code to run the experiments from our ILP 2011 and CP 2012 papers on CSPs and feature terms. They require the FTL library (see above)
  • datasets

A-MAIL experimentation code
  • Source code to run the experiments (A-MAIL itself is included in the fterm library).
  • datasets

ICML 2010: Multiagent Inductive Learning: an Argumentation-based Approach

ICCBR 2009: On Similarity Measures based on a Refinement Lattice
A large collection of my experiments uses my own library for feature terms: fterm. Feature terms are a formalism to represent relational data which is very suitable for machine learning techniques. It is an alternative to description logics (their relation is explained in this paper). My library implements what is needed to represent data using feature terms, and the basic operations of subsumption, unification, antiunification, and refinement. In order to save/load terms to/from disk, my library uses the NOOS representation language. Feel free to use it, and contact me in case you need help or have problems using it.