Other Android Testing Research

Capture & Replay vs Automated Testing for Android apps

A study was carried out to evaluate and compare the effectiveness of different GUI testing approaches in the context of Android apps, with the goal of understanding how Capture and Replay (C&R) test-generation techniques perform in comparison with Automated Input Generation (AIG) techniques.

To this aim, 20 Computer Engineering students from the University of Naples “Federico II” were enrolled. They were attending the Advanced Software Engineering course, held by one of the authors, for the Master's degree in Computer Engineering.

The students had university-level programming and software engineering skills and, during the course, received several lectures on testing techniques, test automation, GUI testing, and C&R techniques. In addition, they received several lessons on Android programming (at the end of the course they had to develop an Android app), and all of them were Android phone users.

Each student was assigned two tasks for each of the four Android apps considered in the experiment:

UET: In the first task, each student had to produce test cases by exploring the application under test with the Capture and Replay tool Robotium Recorder. The students had no previous knowledge of the applications under test. The tool automatically generates Android JUnit test cases corresponding to the recorded interactions (a sketch of such a generated test case is shown after this list).

IET: In the second task, the same students had to design further test cases with the aim of maximizing the coverage of the source code of the applications under test. They had access to the source code of the applications under test and to the coverage achieved by the previously recorded tests.
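For illustration, the following is a minimal sketch of the kind of Android JUnit test case that a C&R tool such as Robotium Recorder produces from a recorded interaction. The activity name (MainActivity), the widget labels, and the scenario itself are hypothetical placeholders, not taken from the apps used in the study.

import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

// Hypothetical recorded scenario for a hypothetical MainActivity of the app under test.
public class RecordedScenarioTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public RecordedScenarioTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Solo drives the GUI of the activity under test.
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testRecordedScenario() {
        // Replay of the interactions captured during the exploratory session.
        solo.clickOnButton("Add item");
        solo.enterText(0, "Milk");
        solo.clickOnButton("Save");
        // Check that the expected GUI state is reached.
        assertTrue(solo.waitForText("Milk"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}

Once recorded, tests of this kind can be re-executed automatically, and their source-code coverage can be measured with standard instrumentation-based coverage tooling, providing the coverage baseline that students could then try to improve in the IET task.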


Comparing the effectiveness of capture and replay against automatic input generation for Android graphical user interface testing

Di Martino, S., Fasolino, A.R., Starace, L.L.L., Tramontana, P.

Comparing the effectiveness of capture and replay against automatic input generation for Android graphical user interface testing. Software Testing, Verification and Reliability, 2020. DOI: 10.1002/stvr.1754


Exploratory testing and fully automated testing tools represent two viable and cheap alternatives to traditional test‐case‐based approaches for graphical user interface (GUI) testing of Android apps. The former can be executed by capture and replay tools that directly translate execution scenarios registered by testers in test cases, without requiring preliminary test‐case design and advanced programming/testing skills. The latter tools are able to test Android GUIs without tester intervention. Even if these two strategies are widely employed, to the best of our knowledge, no empirical investigation has been performed to compare their performance and obtain useful insights for a project manager to establish an effective testing strategy. In this paper, we present two experiments we carried out to compare the effectiveness of exploratory testing approaches using a capture and replay tool (Robotium Recorder) against three freely available automatic testing tools (AndroidRipper, Sapienz, and Google Robo). The first experiment involved 20 computer engineering students who were asked to record testing executions, under strict temporal limits and no access to the source code. Results were slightly better than those of fully automated tools, but not in a conclusive way. In the second experiment, the same students were asked to improve the achieved testing coverage by exploiting the source code and the coverage obtained in the previous tests, without strict temporal constraints. The results of this second experiment showed that students outperformed the automated tools especially for long/complex execution scenarios. The obtained findings provide useful indications for deciding testing strategies that combine manual exploratory testing and automated testing.

Memory Leaks of Android Applications