This site aims to complement the material published at:
Graafmans, T., Turetken, O., Poppelaars, H., et al. (2021). Process Mining for Six Sigma. Business & Information Systems Engineering, 63, 277–300. https://doi.org/10.1007/s12599-020-00649-w
In this section, we demonstrate PMSS and its tool support by going through a real-life business case in which an organization (ABC Inc.) adopts the Six Sigma quality management framework and follows the DMAIC model.
In the Define phase, ABC performs the planning, preliminary data preparation, and exploratory mining & analysis activities. For the planning, ABC reviews a number of complaints it has received from its suppliers about invoicing and conducts short discussions with several employees involved in the related process, with the main objective of defining the scope of the initiative and the goals it aims to attain. As a result, ABC decides to focus on its invoicing process, which is deemed to suffer from long processing times. It sets up a team comprising a number of business users, internal data and process analysts, and a team leader appointed as the project/process owner.
The planning is facilitated by the preliminary data preparation and exploratory mining & analysis activities, which provide a better understanding of the business problem in the invoicing process. Relevant data about invoicing is extracted from ABC’s current enterprise information system, prepared, and loaded into the PMSS tool support for the subsequent exploratory mining & analysis. At this stage, the data requirements are kept to a minimum to expedite the mining and analysis. (For the sake of brevity, we do not show the screens of the application components used for loading the data.)
Figure 1 shows the screen for the Define dashboard with three elements (numbered 1–3). The first element shows the process graph, the second shows the average processing times for different case types, and the third shows how much certain activities or resources contribute to the average processing time. At the bottom of the second element, we can see that the average processing time is 4 days when all 1855 cases are considered. The average processing time is highest for medium invoice cases (9 days) and lowest for employee declaration cases (2.3 hours).
Figure 1. Define Dashboard
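For readers who want to reproduce this kind of breakdown outside the tool, the sketch below (in Python with pandas) shows one way to compute the average processing time per case type. The column names case_id, case_type, activity, and timestamp are hypothetical and merely stand for a flat event log exported from the enterprise system; they are not PMSS internals.

```python
import pandas as pd

# Hypothetical flat event log: one row per event with case_id, case_type,
# activity, and timestamp columns (the names are assumptions).
log = pd.read_csv("invoicing_event_log.csv", parse_dates=["timestamp"])

# Processing time of a case = time between its first and last event.
case_durations = (
    log.groupby(["case_id", "case_type"])["timestamp"]
       .agg(lambda ts: ts.max() - ts.min())
       .rename("processing_time")
       .reset_index()
)

# Overall average and per case type, comparable to element #2 in Figure 1.
print("overall average:", case_durations["processing_time"].mean())
print(case_durations.groupby("case_type")["processing_time"].mean())
```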
Checking the process graph (the 1st element), the team notices considerable repetition of certain activities as a sign of excessive rework, which could lead to longer processing times. For further analysis, the team selects one of the arrows that indicates the repetition of a certain activity (i.e., the activity ‘final check of invoice’). Figure 2 shows the screen where the arrow is selected, and Figure 3 shows the screen after the selection is confirmed by the user.
Figure 2. Selecting the repetition of an activity
As can be seen in Figure 3, when the process instances (cases) with a repetition of this activity are selected, the average processing time doubles to 8 days.
Figure 3. Process instances with repeated activity selected
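The same kind of selection can be approximated directly on an event log. Continuing the hypothetical pandas sketch above, the following lines keep only the cases in which ‘final check of invoice’ occurs more than once and recompute the average processing time for that subset:

```python
# Cases in which 'final check of invoice' appears more than once, i.e., is repeated.
final_checks = log[log["activity"] == "final check of invoice"]
counts = final_checks.groupby("case_id").size()
repeated_case_ids = counts[counts > 1].index

with_rework = case_durations[case_durations["case_id"].isin(repeated_case_ids)]
print("average processing time, cases with repeated final check:",
      with_rework["processing_time"].mean())
```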
Performing similar analyses on other activity repetitions (and possibly more elaborate analyses of the information shown in the charts) confirms the initial considerations regarding the potential cause of the long processing times. Having gained an initial understanding of how the process has been executed, its performance, and its potential problems, the team closes the Define phase and moves on to the next DMAIC phase.
In the Measure phase, the data analyst takes the lead in extracting additional process execution-related data from ABC’s enterprise system to enrich the event log (e.g., with data covering an extended period of time and additional case types) and in verifying that the data is correct and valid. (Again, for the sake of brevity, the application components used for loading the data are not shown.) Figure 4 shows the dashboard for the Measure phase. The data analyst uses the dashboard to check whether the data loaded into the application is correct (for instance, by comparing the numbers of cases, activities, and users with those in ABC’s enterprise system, from which the data has been extracted). Furthermore, the open and closed cases are rechecked to ensure that cases started and closed as expected.
Figure 4. Measure Dashboard
At the top right of Figure 4, the number of activity repetitions per case is shown. The fact that, on average, each activity in a case is repeated 1.14 times provides a further indication of the cause of the problem.
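This repetition figure is easy to verify from the raw event log; a minimal sketch, continuing the hypothetical pandas example above, is:

```python
# Number of times each activity occurs within each case; a mean above 1
# indicates that activities are, on average, repeated (rework).
occurrences_per_case = log.groupby(["case_id", "activity"]).size()
print("average occurrences of an activity within a case:", occurrences_per_case.mean())
```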
Once the team has verified the correctness of the data, they move to the Analyze phase. Figure 5 shows the dashboard for this phase, which consists of two views: overview and statistics. In the overview (shown in Figure 5), the problematic cases are compared to the other cases. In the case of ABC’s invoicing process, problem cases refer to those process executions that feature activity repetitions and, in turn, rework. Hence, the overview dashboard shown in Figure 5 is tailored to reflect the Six Sigma waste problem at hand, i.e., rework (as opposed to, for instance, inefficiency).
Figure 5. Analyze Dashboard – Overview Tab
The overview dashboard features a process graph (element #1 in Figure 5) to determine whether the paths of the problem cases with rework (in purple) deviate significantly from the paths of the other cases (in orange). In addition, the team analyzes a number of graphs, each differentiated between problem cases and other cases. They analyze the average processing times for the different case types (#2), the resources involved in the process to check whether a particular resource is a major contributor to the problem (#3), and the average number of times an activity occurs in a case (#4). The graph in dashboard element #5 shows the progress of ‘defects per million cases’ (DPMO), a metric of particular importance for Six Sigma initiatives. The graph in element #6 shows the workload, i.e., the percentage of cases that are considered problematic with regard to repetitions.
The team sees that the average processing time (#2) is higher for the problem cases than for the other cases for the majority of case types. The same holds for the number of activity repetitions per case (#4). The DPMO metric, which is expected to be no more than 3.4 at Six Sigma level, is considerably high, at around 175,000.
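To make the scale of this number concrete, a small illustration of how defects per million cases can be computed follows; the counts are hypothetical and serve only to match the order of magnitude reported by the dashboard.

```python
def defects_per_million_cases(defective_cases: int, total_cases: int) -> float:
    """Number of defective (problem) cases per one million cases."""
    return defective_cases / total_cases * 1_000_000

# Hypothetical example: 175 defective cases out of 1000 yields 175,000,
# far above the 3.4 associated with Six Sigma performance.
print(defects_per_million_cases(defective_cases=175, total_cases=1000))
```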
When the team looks more closely at the process graph (Figure 6), they notice the repetitions (self-pointing purple arrows) as the primary manifestation of the rework problem. In addition, activities such as ‘repeat payment process’ are also considered rework and are colored purple.
Figure 6. Process graph (in the Analyze Dashboard) enlarged
Minimizing the process graph and navigating back to the dashboard, the team selects the ‘system’ user in the resources table (element #3, as shown in Figure 7). Accordingly, the system user is involved in around 20% of the activities in both the problem cases and the other cases. However, when they check the upper part (element #2), they see that the average processing times are significantly higher for the problem cases, which provides another perspective on the problem.
Figure 7. Analyze Dashboard Overview Tab – Filtered for a specific user
Next, the team navigates to the ‘statistics’ tab of the Analyze dashboard (Figure 8), which consists of two views. Using these two views, the team aims to find improvement opportunities to act upon. The chart on the left is a Pareto chart (the red line is the cumulative percentage) for two events, ‘multiple final checks needed’ and ‘multiple payment attempts’, pointing to the causes of the problem. This chart can be used to locate the causes that contribute most to the problem. The team uses the dashboard element on the right-hand side to check where the problem originates. They identify that the majority of the problem cases originate from the ‘small invoice’ cases. To select the improvement opportunity with the biggest impact, they move to the Improve phase.
Figure 8. Analyze Dashboard – Statistics Tab
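A Pareto chart of this kind can also be reproduced outside the tool. The sketch below uses matplotlib with hypothetical cause counts; the cause names follow those shown in Figure 8, while the numbers are illustrative only.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical number of problem cases attributed to each cause.
causes = pd.Series({
    "multiple final checks needed": 240,
    "multiple payment attempts": 85,
}).sort_values(ascending=False)
cumulative_pct = causes.cumsum() / causes.sum() * 100

fig, ax = plt.subplots()
ax.bar(causes.index, causes.values)                              # bars: absolute counts
ax2 = ax.twinx()
ax2.plot(causes.index, cumulative_pct, color="red", marker="o")  # red cumulative line
ax2.set_ylim(0, 110)
ax.set_ylabel("number of problem cases")
ax2.set_ylabel("cumulative percentage")
plt.show()
```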
In the Improve phase, the team uses the tool support to assess the likely impact of different improvement options and to select the ones that would potentially bring the highest impact when implemented. Figure 9 presents the dashboard for the Improve phase. The upper part of the dashboard (element #1) shows the contribution of the cases that have been assigned a cause to certain metrics. The graph shows the contribution of two process elements with regard to the average processing times. Clearly, “multiple final checks needed” contributes the most. This indicates that an improvement in this part of the process would likely have the biggest impact on the processing times.
The charts in the bottom part of the screen (element #2) present their contribution over time. Accordingly, the team decides to focus particularly on this issue for improvement. Note that this decision was taken in March 2016.
Figure 9. Improve Dashboard
The main action in the Control phase is to use process mining techniques to monitor the predefined process performance indicators (in ABC’s case, the processing time) and thereby evaluate whether the implemented improvement actions yielded the expected results.
Figure 10 presents a selection of charts that show the performance (with respect to a predefined indicator) plotted over time. The green vertical line marks the time (March 2016) when the improvements planned by the team were put into action in ABC’s invoice processing. As seen in the figure, the actions had a positive influence on the average processing time of the process. The process owner continues to monitor the performance of the process and to seek opportunities for further improvement.
Figure 10. Control Dashboard
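Such a control chart can likewise be sketched from the event log. The following continuation of the hypothetical pandas example plots the monthly average processing time and marks March 2016, the moment the improvements went live, with a green vertical line:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Start time of each case, joined to the case durations computed earlier.
case_starts = log.groupby("case_id")["timestamp"].min().rename("start").reset_index()
monthly_avg = (
    case_durations.merge(case_starts, on="case_id")
                  .set_index("start")["processing_time"]
                  .dt.total_seconds()
                  .div(86_400)          # convert seconds to days
                  .resample("M")
                  .mean()
)

fig, ax = plt.subplots()
monthly_avg.plot(ax=ax)
ax.axvline(pd.Timestamp("2016-03-01"), color="green")  # improvements put into action
ax.set_ylabel("average processing time (days)")
plt.show()
```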
PMSS has been developed by the Information Systems Group at Eindhoven University of Technology and ProcessGold. For more information, please contact: o.turetken@tue.nl