Notes

Automated GUI performance testing (http://www.springerlink.com/content/e6v54g451m45jk24/)

I found this paper pretty interesting.

1. It surveys 50 papers on GUI testing (most of them, 44/50, take model-based approaches, as I found yesterday) and shows that current GUI testing approaches are not well suited to GUI performance testing:

a. First, many existing GUI test automation approaches and tools primarily focus on functional testing and thus do not need to support the capturing and replaying of realistically long interactive sessions. However, for performance testing, the use of realistic interaction sequences is essential.

b. These tools may significantly perturb the application's performance. I think this is very important. If we are going to use a tool such as Selenium (http://seleniumhq.org/) for our automated testing, we need to measure how much perturbation it introduces. We could add a section to the paper reporting the perturbation imposed by Selenium, if we decide to use it for automation.
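To make the perturbation measurement concrete, here is a minimal sketch (the helper names are mine, not from the paper): time the same interaction scenario once driven natively and once driven through the tool, and report the relative overhead. With Selenium, the instrumented `action` would be a WebDriver-driven session.

```python
import time

def measure(action, repeats=5):
    """Time an interaction scenario; return the median wall-clock duration in seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()  # e.g. a native run, or the same session driven by Selenium
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

def perturbation_percent(baseline_s, instrumented_s):
    """Relative overhead of driving the session through the tool, in percent."""
    return 100.0 * (instrumented_s - baseline_s) / baseline_s
```

For example, a scenario with a 2.0 s baseline that takes 2.5 s when driven through the tool shows a 25% perturbation.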

2. They evaluate capture-and-replay approaches as implemented in a set of open-source Java GUI testing tools. They study the practicality of replaying complete interactive sessions of real-world rich-client applications, and they quantify the perturbation caused by the different tools. They found that Pounder (http://pounder.sourceforge.net/) is the best tool for automated performance testing of Java GUI applications.

3. They consider capture and replay an appropriate approach for automated performance testing, for the following reason:

a. Out of the 50 related papers, 44 represent model-based approaches. The use of a model simplifies the automatic generation of event sequences (instantiation); thus, model-based approaches allow the generation of an arbitrary number of sequences of arbitrary lengths. However, the surveyed model-based approaches did not primarily focus on producing long and realistic sequences: their main goal is to increase test coverage, according to various coverage criteria. Many approaches aim at achieving high coverage while keeping sequences as short as possible, especially since exploring the model with longer sequences can lead to a combinatorial explosion in the number of possible sequences. Five approaches include usage frequencies in their models; while such information could be used to generate more realistic event sequences, that is not the focus of those papers. Current work on GUI test automation does not explicitly focus on exercising applications using realistic, long event sequences, so evaluations of that work focus on other aspects, such as reduced testing time and improved coverage. The paper addresses this gap by evaluating GUI test automation approaches, in particular the more mature category of capture-and-replay tools, for their ability to handle real-world usage sessions.
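To illustrate how usage frequencies in a model could drive the generation of more realistic sequences (my sketch, not an algorithm from the paper), one could treat the event-flow model as a Markov chain and take a frequency-weighted random walk. The event names and frequencies below are invented for illustration.

```python
import random

# Hypothetical event-flow model: each GUI event maps to
# (next_event, usage_frequency) pairs observed in real sessions.
MODEL = {
    "open":  [("edit", 0.7), ("close", 0.3)],
    "edit":  [("edit", 0.5), ("save", 0.4), ("close", 0.1)],
    "save":  [("edit", 0.6), ("close", 0.4)],
    "close": [],  # terminal event
}

def generate_sequence(model, start="open", max_len=20, rng=random):
    """Frequency-weighted random walk; stops at a terminal event or max_len."""
    seq = [start]
    while len(seq) < max_len:
        choices = model[seq[-1]]
        if not choices:
            break
        events, weights = zip(*choices)
        seq.append(rng.choices(events, weights=weights)[0])
    return seq
```

A coverage-oriented generator would instead try to keep these walks short; biasing by usage frequency is what would push the walks toward realistic session shapes.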

4. The approach proposed in this paper is based on capture and replay, which is different from our approach. It also summarizes the current approaches used in GUI testing in Figure 2. The automated approach we discussed yesterday can be expressed in their terms:

a. First randomly click a button or trigger an event (or systematically find a button to click)

b. Then abstract from the execution to build a model

c. Update the model by getting feedback from the execution

d. Generate tests based on the model by introducing performance metrics
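The four steps above can be sketched as follows. This is a toy illustration under my own assumptions: `ToyApp`, its screens, and the metric name are hypothetical placeholders, not from the paper or our discussion.

```python
import random

class ToyApp:
    """Stand-in for the GUI under test (hypothetical; a real harness would
    drive actual widgets). States are screens, events are button clicks."""
    def __init__(self):
        self.state = "main"
        self.screens = {
            "main":   {"open_dialog": "dialog", "quit": "main"},
            "dialog": {"ok": "main", "cancel": "main"},
        }
    def current_state(self):
        return self.state
    def enabled_events(self):
        return sorted(self.screens[self.state])
    def fire(self, event):
        self.state = self.screens[self.state][event]
        return self.state

def explore(app, steps=50, rng=random):
    """Steps a-c: fire events (randomly here; a systematic strategy would
    pick untried events first), abstract the observed (state, event,
    next_state) triples into a model, and update it after each execution."""
    model = {}
    state = app.current_state()
    for _ in range(steps):
        event = rng.choice(app.enabled_events())
        nxt = app.fire(event)
        model.setdefault(state, {})[event] = nxt
        state = nxt
    return model

def generate_tests(model, metric="response_time_ms"):
    """Step d: each learned transition becomes a test case tagged with the
    performance metric to collect while replaying it."""
    return [(s, e, model[s][e], metric) for s in model for e in model[s]]
```

The feedback loop is the `explore` function folding each observed transition back into the model; `generate_tests` then walks the learned model rather than the application itself.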