Android GUI Ripper

Tools

The Android GUI Ripper is able to:

- automatically explore an Android application by exercising its GUI, detecting crashes of the application due to unhandled exceptions in its Java source code;

- follow both systematic exploration strategies (Depth-First and Breadth-First) and Random strategies;

- generate re-executable JUnit test cases;

- abstract models of the GUI by reverse engineering, such as the GUI Tree of the explored GUIs and the Event Flow Graph (EFG);

- measure the degree of source code coverage obtained.
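The systematic exploration behind these capabilities can be illustrated with a short sketch. The following is a minimal, self-contained Python model of breadth-first GUI ripping; the toy `APP` dictionary and the function names are illustrative stand-ins, not the tool's actual interface (the real ripper fires events on a device or emulator and emits JUnit tests):

```python
from collections import deque

# Toy model of an app's GUI: state -> {event: next_state}.
# A real ripper would drive an emulator instead of a dictionary.
APP = {
    "main":   {"tap_list": "list", "tap_about": "about"},
    "list":   {"tap_item": "detail", "back": "main"},
    "about":  {"back": "main"},
    "detail": {"back": "list"},
}

def rip_bfs(app, start):
    """Breadth-first ripping: visit every GUI state reachable from `start`,
    recording one event trace (a test-case skeleton) per fired event."""
    visited = {start}
    frontier = deque([(start, [])])   # (state, trace that reaches it)
    traces = []
    while frontier:
        state, trace = frontier.popleft()
        for event, nxt in app[state].items():
            new_trace = trace + [event]
            traces.append(new_trace)  # each trace can become a JUnit test
            if nxt not in visited:    # only extend traces reaching new states
                visited.add(nxt)
                frontier.append((nxt, new_trace))
    return visited, traces

states, traces = rip_bfs(APP, "main")
```

A Depth-First variant is obtained by popping from the other end of the frontier (`frontier.pop()` instead of `popleft()`); a Random strategy would instead fire events picked at random without maintaining a frontier at all.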


Demo (2012)

Publications describing the GUI Ripper

Using GUI ripping for automated testing of Android applications

Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Salvatore De Carmine, Atif M. Memon:

Using GUI ripping for automated testing of Android applications. ASE 2012: 258-261


We present AndroidRipper, an automated technique that tests Android apps via their Graphical User Interface (GUI). AndroidRipper is based on a user-interface driven ripper that automatically explores the app's GUI with the aim of exercising the application in a structured manner. We evaluate AndroidRipper on an open-source Android app. Our results show that our GUI-based test cases are able to detect severe, previously unknown, faults in the underlying code, and the structured exploration outperforms a random approach.

A toolset for GUI testing of Android applications

Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Salvatore De Carmine, Gennaro Imparato:

A toolset for GUI testing of Android applications. ICSM 2012: 650-653


This paper presents a toolset for GUI testing of Android applications. The toolset is centered on a GUI ripper that systematically explores the GUI structure of an application under test with the aim of firing sequences of user events and exposing failures of the application. The toolset supports the execution of a testing procedure that automatically performs crash testing of subject applications and provides test results made of several artifacts. The paper illustrates some examples of using the toolset for testing real Android applications.

A GUI Crawling-Based Technique for Android Mobile Application Testing

Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana:
A GUI Crawling-Based Technique for Android Mobile Application Testing. ICST Workshops 2011: 252-261


As mobile applications become more complex, specific development tools and frameworks, as well as cost-effective testing techniques and tools, will be essential to assure the development of secure, high-quality mobile applications. This paper addresses the problem of automatic testing of mobile applications developed for the Google Android platform, and presents a technique for rapid crash testing and regression testing of Android applications. The technique is based on a crawler that automatically builds a model of the application GUI and obtains test cases that can be automatically executed. The technique is supported by a tool for both crawling the application and generating the test cases. In the paper we present an example of using the technique and the tool for testing a real, small-size Android application, which preliminarily shows the effectiveness and usability of the proposed testing approach.

Publications presenting variations of the GUI Ripper

MobiGUITAR: Automated Model-Based Testing of Mobile Apps

Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Bryan Dzung Ta, Atif M. Memon:

MobiGUITAR: Automated Model-Based Testing of Mobile Apps. IEEE Softw. 32(5): 53-59 (2015)


As mobile devices become increasingly smarter and more powerful, so too must the engineering of their software. User-interface-driven system testing of these devices is gaining popularity, with each vendor releasing some automation tool. However, these tools are inappropriate for amateur programmers, an increasing portion of app developers. MobiGUITAR (Mobile GUI Testing Framework) provides automated GUI-driven testing of Android apps. It's based on observation, extraction, and abstraction of GUI widgets' run-time state. The abstraction is a scalable state machine model that, together with test coverage criteria, provides a way to automatically generate test cases. When applied to four open-source Android apps, MobiGUITAR automatically generated and executed 7,711 test cases and reported 10 new bugs. Some bugs were Android-specific, stemming from the event- and activity-driven nature of Android.


Considering Context Events in Event-Based Testing of Mobile Applications

Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Nicola Amatucci:

Considering Context Events in Event-Based Testing of Mobile Applications. ICST Workshops 2013: 126-133


A relevant complexity factor in developing and testing mobile apps is their sensitivity to changes in the context in which they run. As an example, apps running on a smartphone can be influenced by location changes, phone calls, device movements and many other types of context events. In this paper, we address the problem of testing a mobile app as an event-driven system by taking into account both context events and GUI events. We present approaches based on the definition of reusable event patterns for the manual and automatic generation of test cases for mobile app testing. One of the proposed testing techniques, based on a systematic and automatic exploration of the behaviour of an Android app, has been implemented, and some preliminary case studies on real apps have been carried out in order to explore its effectiveness.

AGRippin: a novel search based testing technique for Android applications

Domenico Amalfitano, Nicola Amatucci, Anna Rita Fasolino, Porfirio Tramontana:

AGRippin: a novel search based testing technique for Android applications. DeMobile@SIGSOFT FSE 2015: 5-12


Recent studies have shown a remarkable need for testing automation techniques in the context of mobile applications. The main contributions in the literature in the field of testing automation concern techniques such as Capture/Replay, Model Based, Model Learning and Random techniques. Unfortunately, only the last two types of techniques are applicable if no previous knowledge about the application under test is available. Random techniques are able to generate effective test suites (in terms of source code coverage) but they need a remarkable effort in terms of machine time, and the tests they generate are quite inefficient due to their redundancy. Model Learning techniques generate more efficient test suites but often do not reach good levels of coverage. In order to generate test suites that are both effective and efficient, in this paper we propose AGRippin, a novel Search Based Testing technique based on the combination of genetic and hill climbing techniques. We carried out a case study involving five open source Android applications that demonstrated how the proposed technique is able to generate test suites that are more effective and efficient than the ones generated by a Model Learning technique.
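The combination of genetic and hill-climbing search can be sketched abstractly. The Python below is a minimal illustration under stated assumptions, not the AGRippin implementation: fitness is the set of GUI states a candidate event sequence covers in a toy app model, with a one-point crossover as the genetic step and a mutation-based hill climber as the local step. All names (`APP`, `coverage`, `crossover`, `hill_climb`) are hypothetical.

```python
import random

# Toy app model: state -> {event: next_state} (stand-in for a real device).
APP = {
    "main":   {"tap_list": "list", "tap_about": "about"},
    "list":   {"tap_item": "detail", "back": "main"},
    "about":  {"back": "main"},
    "detail": {"back": "list"},
}
EVENTS = sorted({e for transitions in APP.values() for e in transitions})

def coverage(trace):
    """Fitness: set of states reached by executing `trace` from 'main'.
    Events not enabled in the current state are simply skipped."""
    state, covered = "main", {"main"}
    for ev in trace:
        if ev in APP[state]:
            state = APP[state][ev]
            covered.add(state)
    return covered

def crossover(a, b):
    """One-point crossover of two event sequences (genetic step).
    Both sequences must have length >= 2."""
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def hill_climb(trace, steps=50):
    """Local search: mutate one event at a time, keep only improvements."""
    best, best_cov = trace, coverage(trace)
    for _ in range(steps):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.choice(EVENTS)
        cov = coverage(cand)
        if len(cov) > len(best_cov):
            best, best_cov = cand, cov
    return best, best_cov
```

A full search-based generator would interleave the two steps over a population of event sequences, using coverage of the real app (rather than a toy model) as the fitness function.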

A Technique for Parallel GUI Testing of Android Applications

Porfirio Tramontana, Nicola Amatucci, Anna Rita Fasolino:

A Technique for Parallel GUI Testing of Android Applications. ICTSS 2020: 169-185


There is a great need for effective and efficient testing processes and tools for mobile applications, due to their continuous evolution and to the sensitivity of their users to failures. Industry and researchers focus their efforts on the realization of effective, fully automatic testing techniques for mobile applications. Many of the proposed testing techniques lack efficiency because their algorithms cannot be executed in parallel. In particular, Active Learning testing techniques usually rely on sequential algorithms.

In this paper we propose an Active Learning technique for the fully automatic exploration and testing of Android applications that parallelizes and improves a general algorithm proposed in the literature. The novel parallel algorithm has been implemented in the context of a prototype tool exploiting a component-based architecture, and has been experimentally evaluated on three open-source Android applications by varying different deployment configurations.

The measured results have shown the feasibility of the proposed technique and an average saving in testing time between 33% (deploying two testing resources) and about 80% (deploying 12 testing resources).


Publications describing research exploiting GUI Ripper explorations

A general framework for comparing automatic testing techniques of Android mobile apps

Domenico Amalfitano, Nicola Amatucci, Atif M. Memon, Porfirio Tramontana, Anna Rita Fasolino:

A general framework for comparing automatic testing techniques of Android mobile apps. J. Syst. Softw. 125: 322-343 (2017)


As an increasing number of new techniques are developed for quality assurance of Android applications (apps), there is a need to evaluate and empirically compare them. Researchers as well as practitioners will be able to use the results of such comparative studies to answer questions such as, “What technique should I use to test my app?” Unfortunately, there is a severe lack of rigorous empirical studies on this subject. In this paper, for the first time, we present an empirical study comparing all existing fully automatic “online” testing techniques developed for the Android platform. We do so by first reformulating each technique within the context of a general framework. We recognize the commonalities between the techniques to develop the framework. We then use the salient features of each technique to develop parameters of the framework. The result is a general recasting of all existing approaches in a plug-in based formulation, allowing us to vary the parameters to create instances of each technique, and empirically evaluate them on a common set of subjects. Our results show that (1) the proposed general framework abstracts all the common characteristics of online testing techniques proposed in the literature, (2) it can be exploited to design experiments aimed at performing objective comparisons among different online testing approaches and (3) some parameters that we have identified influence the performance of the testing techniques.

A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques

Domenico Amalfitano, Nicola Amatucci, Anna Rita Fasolino, Porfirio Tramontana:

A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques. ASE Workshops 2015: 50-57

Fully automated GUI testing techniques play an important role in the modern software development life cycles. These techniques are implemented by algorithms that automatically traverse the GUI by interacting with it, like robots discovering unexplored spaces. These algorithms are able to define and run test cases on the fly, while the application is in execution. Testing adequacy, performance or costs of such techniques may differ on the basis of different factors. In this paper we will propose an approach for comparing fully automated GUI testing techniques in a systematic manner. The approach is based on a generalized parametric algorithm that abstracts the key aspects of these techniques and provides a conceptual framework that can be used to define and compare different testing approaches. To validate the framework, we exploit it to compare the testing adequacy and the GUI models inferred by 9 fully automated testing techniques obtained by varying the configuration of the algorithm. The experiment is performed on a real Android application.

Exploiting the Saturation Effect in Automatic Random Testing of Android Applications

Domenico Amalfitano, Nicola Amatucci, Anna Rita Fasolino, Porfirio Tramontana, Emily Kowalczyk, Atif M. Memon:
Exploiting the Saturation Effect in Automatic Random Testing of Android Applications. MOBILESoft 2015: 33-43


Monkey Fuzz Testing (MFT), a form of random testing, continues to gain popularity to test Android apps because of its ease of use. (Untrained) programmers use MFT tools to fully automatically detect certain classes of faults in apps. A challenge for these tools is the lack of a stopping criterion -- programmers currently typically stop these tools when they run out of time. In this paper, we use the notion of the Saturation Effect of an MFT tool on an app under test to define a stopping criterion, parameterized by the app's preconditions and the tool's configurations. We have implemented our approach in the Android Ripper MFT tool. We experimentally report results on 18 real Android app subjects. We show that the saturation effect is able to stop testing when test adequacy has been achieved without wasting test cycles.


Comparing the effectiveness of capture and replay against automatic input generation for Android graphical user interface testing

Sergio Di Martino, Anna Rita Fasolino, Luigi Libero Lucio Starace, Porfirio Tramontana:

Comparing the effectiveness of capture and replay against automatic input generation for Android graphical user interface testing. Softw. Test. Verification Reliab. (2020). DOI: 10.1002/stvr.1754


Exploratory testing and fully automated testing tools represent two viable and cheap alternatives to traditional test-case-based approaches for graphical user interface (GUI) testing of Android apps. The former can be executed by capture and replay tools that directly translate execution scenarios registered by testers into test cases, without requiring preliminary test-case design and advanced programming/testing skills. The latter tools are able to test Android GUIs without tester intervention. Although these two strategies are widely employed, to the best of our knowledge, no empirical investigation has been performed to compare their performance and obtain useful insights for a project manager to establish an effective testing strategy. In this paper, we present two experiments we carried out to compare the effectiveness of exploratory testing approaches using a capture and replay tool (Robotium Recorder) against three freely available automatic testing tools (AndroidRipper, Sapienz, and Google Robo). The first experiment involved 20 computer engineering students who were asked to record testing executions, under strict temporal limits and with no access to the source code. Results were slightly better than those of fully automated tools, but not in a conclusive way. In the second experiment, the same students were asked to improve the achieved testing coverage by exploiting the source code and the coverage obtained in the previous tests, without strict temporal constraints. The results of this second experiment showed that students outperformed the automated tools, especially for long/complex execution scenarios. The obtained findings provide useful indications for deciding on testing strategies that combine manual exploratory testing and automated testing.