Tackling Combinatorial Explosion: A Study of Industrial Needs and Practices for Analyzing Highly Configurable Systems

Online Appendix

Mukelabai Mukelabai,* Damir Nesic,** Salome Maro,* Thorsten Berger,* and Jan-Philipp Steghöfer*

*Chalmers and University of Gothenburg, Sweden

**KTH Stockholm, Sweden

This online appendix accompanies a study investigating industrial practices and needs for analyzing highly configurable systems. The study relies on a survey of 27 practitioners engineering highly configurable systems and on follow-up interviews with 15 of them, covering 18 companies from eight countries. The study design is guided by classifying existing analyses, identifying the properties they analyze, and then systematically eliciting practitioners' needs and practices.

The study finds that reliability is among the most critical properties and that consistency between variability models and artifacts is essential, but that most other variability-model analyses are not perceived as needed. It identifies rather pragmatic analysis strategies, including practices (e.g., modularization) that avoid the need for sophisticated analyses.

Testing with experience-based sampling is the most commonly applied strategy, while systematic sampling techniques are rarely applicable.

The results also indicate the need for effective change-impact analyses and for applicable static-analysis tools, and they show that analyses focusing on variability models are not perceived as useful or applicable in industry.

NOTE:

  • In our paper, we state that 100% of our survey respondents use testing to assure general system properties, such as reliability and performance. This statement is based on responses to question 17 (How do you assure [general system] properties?) in the survey report. However, the reader may notice that for survey question 19 (Do you test your configurable system?), 92% answered yes and 8% answered no. Question 19 was added specifically as a precursor to the questions about sampling strategies, and it served to catch respondents who may have mistakenly forgotten to select "testing" under question 17. Hence, there is no contradiction: our statement in the paper refers to how general system properties are assured, not to whether all our respondents perform testing.
  • The reader may observe inconsistencies in the numbering of the questions between the survey report and the questionnaire. This is because some questions had extra comment fields eliciting elaborations. These comment fields are not numbered in the questionnaire, but the reporting system assigned them numbers. For instance, question 18 in the report appears in the questionnaire as an unnumbered comment field under question 17 that reads: "Can you give us some concrete examples of functional or non-functional (quality) properties from your system that you find nice to have or critical?"