Self-adaptive and self-organizing (SASO) systems are, by nature, systems that operate and adapt under dynamic circumstances. This may be because the environment in which they are embedded changes, the problem the system must address changes, or the devices on which they operate exhibit varying behaviour (a device fails, communication capabilities fluctuate, etc.).

Any scientific study of a software solution should include a thorough evaluation. The last decade has seen many self-adaptive and self-organizing systems proposed and studied. However, while the literature includes some evaluation of the proposed algorithms and systems, it also highlights that a fair comparison between algorithms remains a difficult and unsolved problem.

Evaluating solutions to dynamic problems is particularly complicated. The challenges include:
  • solutions to dynamic problems need to take into account various (and sometimes conflicting) objectives, including timeliness of adaptation, overheads (for computation and communication), tolerance of disruption, etc.
  • approaches for evaluating on-line algorithms, such as k-competitive analysis, may not be suitable, since they require a notion of an optimal solution, which is hard to define for dynamic solution techniques
  • comparing self-organizing solutions to their static counterparts is not always fair; for example, comparing a distributed or decentralized solution, which must account for extra communication so the system can scale, with an off-line algorithm is unfair
  • comparing adaptive (i.e. self-adaptive or self-organizing) solutions is hard, because they are driven by non-functional requirements, and requirements such as reliability, stability or system lifetime may be more important than performance efficiency
Additionally, and related to these challenges, there are relatively few (and in some cases no) benchmark suites or codes for dynamic scenarios to work with.

In short, disciplined approaches that allow us to reason about and study the qualities of SASO systems are required.

This workshop aims to bring together a variety of researchers in the SASO, autonomic computing and cyber-physical systems areas to discuss these topics. The workshop will solicit experience reports, theoretical work, position statements, and other research contributions.

Topics of Interest
Contributions are welcomed as:
  • Experience reports
  • Position papers
  • Theoretical work
  • Other research contributions
on topics that include - but are not limited to - the following:
  • Techniques, models and theories on defining evaluation criteria for SASO systems
  • Models of - and examples of - dynamic benchmark problems for SASO systems
  • Experience reports on evaluation practice in SASO systems
  • Methodologies for disciplined development of SASO systems


The workshop will mainly be a forum for group discussion rather than a mini-conference. Depending on the submissions, the discussions may be steered by studying one or a small number of concrete cases. Short presentations will pitch the main ideas of the contributors.

As a concrete deliverable of the workshop, the organizers will consider a collaborative effort in writing a survey and research agenda paper, describing a roadmap towards evaluating SASO systems.

Important Deadlines

Abstract submission:                      July 1, 2012
Full paper submission:                    July 4, 2012
Notification of acceptance:               July 25, 2012
Early registration:                       August 20, 2012
Camera-ready version of accepted papers:  August 24, 2012