AuRUS

This page describes how to replicate the experiments from the paper "Automated Repair of Unrealisable LTL Specifications Guided by Model Counting", presented at the Genetic and Evolutionary Computation Conference (GECCO) 2023. The instructions below walk you through setting up the experiment environment, running the experiments, and analysing the results so that they can be compared with those reported in the paper.

You can find the paper at the following link: AuRUS (PDF).

Installation Instructions

REQUIREMENTS

DOWNLOAD

You can download the tool from this link: unreal-repair.zip. Or, if you prefer, you can fork the repository :)
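After downloading, extract the archive and move into it. A minimal sketch is shown below; the folder name unreal-repair is an assumption, so adjust it to whatever the archive actually unpacks to:

unzip unreal-repair.zip
cd unreal-repair        # hypothetical folder name; use the extracted directory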

INSTALL

INSTALL DOCKER IMAGE

Follow these instructions to install the Docker image that contains the Strix installation.
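As a rough sketch of the Docker workflow, assuming the image is built from a Dockerfile in the repository (the image tag strix-aurus below is hypothetical; follow the linked instructions for the actual names):

# build the image containing Strix (hypothetical tag)
docker build -t strix-aurus .

# start a container with the repository mounted so the scripts can reach Strix
docker run -it -v "$(pwd)":/workspace -w /workspace strix-aurus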

Running the Experiments

In the folder case-studies you can find the scripts with the specifications of each of the case studies used in the paper.

RUN THE TOOL

The tool takes as input a specification in TLSF format.  
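For illustration only (this is not the arbiter.tlsf shipped with the tool), a minimal TLSF specification has roughly the following shape:

INFO {
  TITLE:       "Simple arbiter"
  DESCRIPTION: "Two clients requesting a shared resource"
  SEMANTICS:   Mealy
  TARGET:      Mealy
}

MAIN {
  INPUTS {
    r0;
    r1;
  }
  OUTPUTS {
    g0;
    g1;
  }
  GUARANTEES {
    G (r0 -> F g0);
    G (r1 -> F g1);
    G (!(g0 && g1));
  }
}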

To run our tool, use the script unreal-repair.sh.

For example, to run it on our motivating example, Arbiter:

./unreal-repair.sh case-studies/arbiter/arbiter.tlsf 

In our experimental evaluation we ran the following command:

./unreal-repair.sh -Max=1000 -Gen=1000 -Pop=100 -k=20 -GATO=7200 -addA -out=result/arbiter/ case-studies/arbiter/arbiter.tlsf 

In this example we indicate to the tool that 1000 is the maximum number of individuals to be generated, 100 is the population size per iteration, and assumptions can be added (-addA). The tool saves the realisable repairs in the directory indicated with the -out flag or, by default, in the same directory as the input specification.

We can also provide the tool with (genuine) reference solutions, which are used at the end of the analysis to assess the quality of the learnt repairs, by adding:

-ref=case-studies/arbiter/genuine/arbiter_fixed0.tlsf -ref=case-studies/arbiter/genuine/arbiter_fixed1.tlsf -ref=case-studies/arbiter/genuine/arbiter_fixed2.tlsf -ref=case-studies/arbiter/genuine/arbiter_fixed3.tlsf
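Putting these together, a full invocation for the Arbiter case study could look as follows (this simply combines the experimental configuration and the reference solutions shown above; the exact flag ordering is our assumption):

./unreal-repair.sh -Max=1000 -Gen=1000 -Pop=100 -k=20 -GATO=7200 -addA -out=result/arbiter/ -ref=case-studies/arbiter/genuine/arbiter_fixed0.tlsf -ref=case-studies/arbiter/genuine/arbiter_fixed1.tlsf -ref=case-studies/arbiter/genuine/arbiter_fixed2.tlsf -ref=case-studies/arbiter/genuine/arbiter_fixed3.tlsf case-studies/arbiter/arbiter.tlsf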

REPRODUCE THE EXPERIMENTS

In order to reproduce the experimental evaluation of the paper, we provide several scripts:

Reading the Results

You can download the results of our experimental evaluation from this link: Results.zip

We ran the algorithm 10 times for each case study. All the results we obtained are reported in the results section.

You can use the script read-results.sh to extract a summary of the results for each run.

Example:

./read-results.sh result/result-70-10-20/arbiter/arbiter-genuine

This command will show the results obtained for the Arbiter case study in the 10 runs with a configuration in which 0.7 was assigned to the realisability property, 0.1 to the syntactic distance, and 0.2 to the semantic distance.
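To summarise every case study under the same configuration, a loop such as the following can be used (it assumes each case-study directory contains a *-genuine subfolder, as in the Arbiter example above; adjust the pattern if your layout differs):

for cs in result/result-70-10-20/*/; do
  ./read-results.sh "$cs"*-genuine    # directory layout assumed
done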

 

Flags to Configure the GA

Our tool offers many flags to run the Genetic Algorithm under different configurations.

Usage: ./unreal-repair.sh [flags] input-file.tlsf

Flags: