Software applications (apps) play an increasingly important role in many aspects of society. In particular, mobile apps and web apps are the most prevalent among all applications and are widely used both in industry and in people's daily lives. To help ensure the quality of mobile and web apps, many approaches have been introduced to improve GUI testing via automated exploration, including random testing, model-based testing, and learning-based testing. Despite these extensive efforts, existing approaches remain limited in reaching high code coverage, constructing high-quality models, and being generally applicable. Reinforcement learning (RL)-based approaches, a representative and advanced family of automated GUI exploration techniques, face several key challenges, including effective app state abstraction and reward function design. Moreover, they heavily depend on specific execution platforms (i.e., Android or the Web), leading to poor generalizability and an inability to adapt to different platforms.
This work tackles these challenges based on the high-level observation that apps from distinct platforms share commonalities in GUI design. On this basis, we propose PIRLTest, an effective platform-independent approach for app testing. Specifically, PIRLTest utilizes computer vision and reinforcement learning techniques in a novel, synergistic manner for automated testing. It extracts GUI widgets from GUI pages, characterizes the corresponding GUI layouts, and embeds the GUI pages as states. Each app GUI state combines the macroscopic perspective (the GUI layout) and the microscopic perspective (the GUI widgets), and captures the critical semantic information from GUI images. This makes PIRLTest platform-independent and generally applicable across different platforms. PIRLTest explores apps under the guidance of a curiosity-driven strategy, which uses a Q-network to estimate the values of specific state-action pairs and thereby encourages exploration of uncovered pages without platform dependency. During exploration, every action is assigned a reward that considers both the app GUI state and the concrete widgets, helping the framework reach more uncovered pages. We conduct an empirical study on 20 mobile apps and 5 web apps. The results show that PIRLTest adapts to different platforms at zero cost and outperforms the baselines, covering 6.3%-41.4% more code on mobile apps and 1.5%-51.1% more code on web apps. PIRLTest detects 128 distinct bugs on mobile and web apps, including about 100 bugs that cannot be detected by the baselines.
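To make the two-perspective state abstraction concrete, the following is a minimal sketch of how a GUI page could be embedded as a platform-independent state, assuming a vision-based detector has already produced widget types and bounding boxes. The `Widget` dataclass, the grid size, and the hashing scheme are illustrative assumptions rather than PIRLTest's exact embedding.

```python
# Hypothetical sketch of the two-perspective (layout + widget) state abstraction.
from dataclasses import dataclass
from typing import List, Tuple
import hashlib

@dataclass
class Widget:
    kind: str                        # e.g. "button", "input" (from a vision detector)
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) on the screenshot

def layout_signature(widgets: List[Widget], screen: Tuple[int, int], grid=(4, 8)) -> str:
    """Macroscopic perspective: map each widget onto a coarse grid cell so that
    pages with the same overall layout collapse to the same signature."""
    w, h = screen
    cells = sorted({(int(wd.bbox[0] / w * grid[0]), int(wd.bbox[1] / h * grid[1]))
                    for wd in widgets})
    return ";".join(f"{x},{y}" for x, y in cells)

def widget_signature(widgets: List[Widget]) -> str:
    """Microscopic perspective: summarize the kinds of widgets on the page."""
    return ";".join(sorted(wd.kind for wd in widgets))

def abstract_state(widgets: List[Widget], screen: Tuple[int, int]) -> str:
    """Combine both perspectives into one platform-independent state identifier."""
    raw = layout_signature(widgets, screen) + "|" + widget_signature(widgets)
    return hashlib.md5(raw.encode()).hexdigest()

# Usage: pages rendering the same widgets in the same layout map to one state,
# whether the screenshot comes from an Android device or a web browser.
page = [Widget("button", (40, 900, 200, 80)), Widget("input", (40, 300, 640, 90))]
print(abstract_state(page, screen=(720, 1280)))
```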
We introduce a platform-independent GUI testing approach via image embedding and reinforcement learning, which is zero-cost when adapted to different platforms.
We propose a novel algorithm to abstract app GUI states via GUI image embedding, combining widget extraction (the microscopic perspective) and layout characterization (the macroscopic perspective). This algorithm effectively characterizes the states in the RL model.
We propose a novel reward function design in the RL framework that comprehensively considers the exploration of both the app GUI as a whole and the concrete GUI widgets (see the sketch after this list).
We implement a tool and conduct an empirical evaluation of the effectiveness of PIRLTest on both mobile apps and web apps, which shows that PIRLTest outperforms the representative baselines.
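As a complement to the contribution on reward design, the following is a hedged sketch of how a curiosity-style reward and value update could combine page-level and widget-level exploration. The visit-count formulation, the weights, and the tabular Q stand-in (in place of the actual Q-network) are illustrative assumptions, not PIRLTest's concrete reward function.

```python
# Hedged sketch: curiosity-style reward over GUI states and widgets, with a
# tabular Q stand-in for the Q-network. All constants are illustrative.
from collections import defaultdict

state_visits = defaultdict(int)    # how often each abstracted GUI state was seen
widget_visits = defaultdict(int)   # how often each (state, action) pair was taken
Q = defaultdict(float)             # Q[(state, action)], stand-in for the Q-network

ALPHA, GAMMA = 0.5, 0.9            # learning rate and discount factor
W_STATE, W_WIDGET = 0.7, 0.3       # weights for page-level vs. widget-level novelty

def curiosity_reward(next_state: str, action: str) -> float:
    """Higher reward for reaching rarely seen pages and exercising rarely used widgets."""
    state_visits[next_state] += 1
    widget_visits[(next_state, action)] += 1
    return (W_STATE / state_visits[next_state]
            + W_WIDGET / widget_visits[(next_state, action)])

def update(state: str, action: str, next_state: str, next_actions: list) -> None:
    """One-step Q-learning update toward the curiosity reward."""
    r = curiosity_reward(next_state, action)
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# Usage: repeatedly triggering the same widget on the same page yields diminishing
# reward, nudging exploration toward uncovered pages and untried widgets.
update("login_page", "tap:button_login", "home_page", ["tap:menu", "tap:search"])
```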
Code Repository URL: https://github.com/iGUITest/PIRLTest
Detailed Results URL: https://github.com/NJU-iSE-BigCode2/PIRLTest-paper/blob/main/Detailed%20Result.xlsx