Since its initial commit a few years back, our Smart ID Desktop App (a Windows universal application developed at Nexus) has been a dynamically developing project. As it grew, so did the resources needed to properly test the software. For example, not long ago we also added hardware machines to our test network (the set of machines we run tests on), greatly increasing the coverage of our tests but also their cost. We therefore decided it was the right time to invest in automated testing. Let me tell you, a lot has been done since! In the text below I will tell the story of our progress so far. At the bottom you can also watch a short demo video (note: it's the first one I ever made).
The roadmap of our testing solution is divided into three major layers. I am currently working on the middle one, so I will describe it in the most detail.
Bottom Layer: Writing individual tests
Naturally, the first step is an application in which the individual UI tests can be written and run. Here the workload is two-fold. First, we had to provide an adequate mock of the server called Smart ID Identity Manager (originally Prime) that interacts with Hermod and essentially drives the application we test. This part is already fully integrated and running. Second, the UI tests themselves. Since we are testing a UWP application, our tests are based on the WinAppDriver REST service and the Appium/Selenium SDK. We are still extending the number of tests, but as the code base nears completion, adding new ones has become a relatively straightforward task.
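The real tests are written against the Appium/Selenium SDK, but since WinAppDriver is just a REST service, a quick way to see what happens under the hood is to talk to it directly from PowerShell. Below is a minimal, illustrative sketch only: it assumes WinAppDriver is already running on its default port 4723, and uses the Windows Calculator app id as a stand-in for the Smart ID Desktop App.

```powershell
# Illustrative sketch: driving the WinAppDriver REST endpoint directly.
# Assumes WinAppDriver is running locally on its default port (4723).
$driver = 'http://127.0.0.1:4723'

# Create a session for the app under test (Calculator is only a stand-in here).
$caps = @{ desiredCapabilities = @{ app = 'Microsoft.WindowsCalculator_8wekyb3d8bbwe!App' } } |
    ConvertTo-Json -Depth 3
$session = Invoke-RestMethod -Method Post -Uri "$driver/session" -Body $caps -ContentType 'application/json'
$sid = $session.sessionId

# Locate a UI element by its accessibility id and click it.
$find = @{ using = 'accessibility id'; value = 'num5Button' } | ConvertTo-Json
$element = Invoke-RestMethod -Method Post -Uri "$driver/session/$sid/element" -Body $find -ContentType 'application/json'
Invoke-RestMethod -Method Post -Uri "$driver/session/$sid/element/$($element.value.ELEMENT)/click" -Body '{}' -ContentType 'application/json'

# Tear the session down again.
Invoke-RestMethod -Method Delete -Uri "$driver/session/$sid"
```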
Mid Layer: Test network puppet master (written as a PowerShell module)
Originally our test network contained one ESXi virtual machine for each Windows version. Recently, however, we discovered that some particular machines tend to exhibit errors while operating with private keys stored on virtual smart cards. We later realized the problem was related to the machine's security chip, the TPM (Trusted Platform Module). Eventually we extended the network with hardware machines chosen for their TPM version and manufacturer (Infineon, STMicroelectronics, Intel, and AMD). Overall we are testing on approximately 15 machines. I believe everyone can imagine that even with automated tests available, it is an extremely time-consuming task to handle all of them individually.
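Since the TPM manufacturer turned out to matter, it helps to be able to inventory the chips across the network quickly. A small sketch, assuming PowerShell remoting is enabled on the hardware machines and using placeholder machine names; the built-in Get-Tpm cmdlet does the actual work.

```powershell
# Sketch: inventory the TPM of each hardware test machine.
# Machine names are placeholders; Get-Tpm requires elevation on the target.
$hardwareMachines = 'TESTPC-INFINEON', 'TESTPC-STM', 'TESTPC-INTEL', 'TESTPC-AMD'

Invoke-Command -ComputerName $hardwareMachines -ScriptBlock {
    $tpm = Get-Tpm
    [pscustomobject]@{
        Machine      = $env:COMPUTERNAME
        Manufacturer = $tpm.ManufacturerIdTxt
        Version      = $tpm.ManufacturerVersion
        Ready        = $tpm.TpmReady
    }
} | Format-Table Machine, Manufacturer, Version, Ready
```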
So the plan arose to build a tool that can remotely control all the machines (both ESXi and hardware) as simply as possible, with the following capabilities (a hypothetical usage sketch follows the list):
Set machines to some predefined state to ensure stable testing conditions.
Distribute and install/uninstall the related software: WinAppDriver, the Visual Studio Test Platform, and the app itself.
Run the tests (more than 60 individual tests at the moment).
Collect data: there are 3 different logs for each machine (Smart ID Desktop app, test app, and test results).
Process the results.
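To make "as simply as possible" concrete, here is what driving such a module from one console could look like. The module and command names in this sketch are purely hypothetical, not the real API; they only mirror the capabilities listed above.

```powershell
# Purely illustrative: hypothetical module and command names.
Import-Module TestNetworkMaster

# Reset every machine in the network to a known baseline state.
Reset-TestMachine -All

# Push and install the current build plus the test tooling.
Install-TestSoftware -All -MsiPath '\\testserver\builds\SmartIdDesktop.msi'

# Kick off the full test run in the background and wait for it to finish.
$job = Start-TestRun -All -AsJob
Receive-Job $job -Wait

# Pull the three log types from every machine and build the report.
Get-TestResults -All -OutputPath '\\testserver\results'
```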
Let us briefly ponder the solution. Since the major hurdle here is the remote operations, a command-line interface was the natural candidate and, since we are dealing with Windows, PowerShell is the best fit. In fact, the final outcome is a PowerShell module (a script that, once imported, introduces new functions and variables to the PowerShell console). The module is installed on a server containing the test DLLs, related PowerShell scripts (e.g. for software installation or data harvesting), and MSI files. Test machines are controlled by running remote scripts via Invoke-Command (hardware machines) and Invoke-VMScript (virtual machines). As usual, to get around Windows restrictions, the scripts are launched through the Task Scheduler - a generally useful trick that no Windows admin approves of. Note that PowerShell can run operations in the background with the Start-Job command, so the console is never blocked while the tests run, and you can even watch a progress bar showing the current status.
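To show how these pieces fit together, here is a heavily simplified sketch for a single hardware machine, assuming PowerShell remoting, a test user that is logged on interactively, and illustrative machine, task, and path names. A plain remote session cannot touch the interactive desktop that UI tests need, hence the scheduled-task detour.

```powershell
# Sketch of the Task Scheduler trick, fired off as a background job so the
# master console stays responsive. All names and paths are illustrative.
$job = Start-Job -ScriptBlock {
    Invoke-Command -ComputerName 'TESTPC-INFINEON' -ScriptBlock {
        # Register a task that runs the test script in the logged-on user's session.
        $action    = New-ScheduledTaskAction -Execute 'powershell.exe' `
                         -Argument '-ExecutionPolicy Bypass -File C:\Tests\Run-UiTests.ps1'
        $principal = New-ScheduledTaskPrincipal -UserId "$env:COMPUTERNAME\testuser" -LogonType Interactive
        Register-ScheduledTask -TaskName 'SmartIdUiTests' -Action $action -Principal $principal -Force | Out-Null

        # Kick the task off immediately.
        Start-ScheduledTask -TaskName 'SmartIdUiTests'
    }
}

# Come back for the output whenever convenient.
Receive-Job $job -Wait
```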
Once the tests are finished, the next task is to harvest the results and make them presentable. This is handled by a background task started on each machine at test startup, which sends the test results back to the master at runtime. To improve readability, we eventually intend to organize the data into HTML tables. My goal is for the resulting module to be easy and intuitive, yet powerful at the same time. You can judge my results against my goals in the short video below.
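The HTML part does not need much machinery; PowerShell's built-in ConvertTo-Html gets surprisingly far. A rough sketch, assuming the harvested results have already been condensed into one CSV summary per machine (that layout is an assumption for illustration, not the module's real format):

```powershell
# Sketch: render per-machine result summaries as a single HTML report.
# Assumed layout: \\testserver\results\<machine>\summary.csv
$rows = Get-ChildItem '\\testserver\results' -Recurse -Filter 'summary.csv' |
    ForEach-Object {
        $machine = $_.Directory.Name   # folder name stands in for the machine name
        Import-Csv $_.FullName |
            Select-Object @{ n = 'Machine'; e = { $machine } }, TestName, Outcome, Duration
    }

$style = '<style>table{border-collapse:collapse} td,th{border:1px solid #999;padding:4px}</style>'
$rows |
    ConvertTo-Html -Title 'Smart ID Desktop App test run' -Head $style |
    Out-File '\\testserver\results\report.html'
```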
Other potential features and uses of the module:
Generic implementation - this solution is not explicitly tied to one application or test, so the module might eventually be useful beyond Smart ID Desktop App testing.
Running the tests on customers' machines - it is very easy to deploy the module on any network. As you may know, our customers often use diverse and unexpected Windows configurations. The module could be used during the integration/testing phase to get ahead of potential issues before real users are exposed to them.
Top Layer: Azure pipelines
Naturally, it is not enough to be able to run the tests on multiple machines with one click. That is one click too many! The next step is to run the tests after each commit in our build system in Azure Pipelines. This should be an easy task to implement: it seems enough to prepare an agent that alerts our test server after each successful build and provides it with the new version of our app.
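As a rough idea of what such a post-build step could look like, the pipeline agent might simply copy the freshly built MSI to the test server and ask it to start a run. The share paths, server name, and module commands below are assumptions carried over from the hypothetical sketch above; only the BUILD_ARTIFACTSTAGINGDIRECTORY variable is a standard Azure Pipelines environment variable.

```powershell
# Hypothetical post-build step run by the pipeline agent.
param(
    [string]$MsiPath = "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\SmartIdDesktop.msi"
)

# Hand the new build over to the test server.
Copy-Item $MsiPath -Destination '\\testserver\drop\SmartIdDesktop.msi' -Force

# Ask the test server to start a full run (module/command names are hypothetical).
Invoke-Command -ComputerName 'testserver' -ScriptBlock {
    Import-Module TestNetworkMaster
    Start-TestRun -All -MsiPath 'C:\drop\SmartIdDesktop.msi' -AsJob
}
```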
Almost one and a half years later, the test network driven by the puppet master, now known as Nexus Mango, has grown into an invaluable means of quality assurance for our product. With a single PowerShell command, we can launch roughly 500 tests on each of the 14 machines (10 of them virtual) in the test network (the full test batch runs for about 10 hours). At the same time, a background process collects logs and results, which are automatically reorganized into well-arranged HTML, enabling fast and convenient analysis of the app's defects.
Currently, every new feature introduced is immediately covered on the automated testing side. Needless to say, the reliability of newly released versions has grown beyond belief.
Using the module (July 2020) - 4 ESXi virtual machines (top of the screen) + 4 laptops (3 of them behind me)