Part 1: Process Mining to Implement Better DevOps
Sage business gurus tell us that what gets measured gets done. But how, exactly, do we measure success? Agreeing on what to measure and how to measure it is challenging. Harder still, turning measurement into action without producing unwanted side effects is . . . almost impossible.
In this four-part series, we’ll examine using process mining techniques as a way to get information about how our software development, testing, delivery, and support processes are working. Mining and analysis of data from the tools that support these environments will help implement a DevOps lifecycle and improve the ability to deliver value to the customer.
Part one will explain process mining and provide an example. In later installments in the series, we’ll examine case studies, identify key benefits, and discuss how to implement process mining.
Process Mining Insight
Typically, organizations don’t understand how they do their work. If administrative processes aren’t well understood, software development and maintenance processes are understood even less. An organization doing agile software development focuses more on creating software and less on documenting the process. And when documentation does exist, it describes the ideal process, not the process as it is actually performed. The connection between how software development and support are done and the quality and usefulness of the resulting software products and information is murkier still.
One of DevOps’ goals is to illuminate the process path from user stories to development to testing to delivery and maintenance. Most tools in a DevOps toolchain provide some limited insight regarding each particular tool’s operation, but don’t give information across the toolchain.
Collecting and analyzing data at depth across the toolchain can help answer the following questions:
- How is the development team working together to deliver new features? Not just cooperating at the visible level, but also sharing code, tracking progress, and collaborating at the user-story level?
- What are the characteristics of the delivered software code, such as security flaws? What is the relationship of the code characteristics to how the code was developed?
- Is the team able to keep its promises to its customers? Is the code that the team delivers right the first time?
- Who on the team consistently takes on more work than they can handle? Who could take on more? Is there a single point of failure or a bottleneck on the team?
- Are there any areas in the code that will cause an unexpected wave of refactoring or rework?
- Is your DevOps process addressing similar challenges consistently? Is the DevOps process being updated to reflect past successes and failures, or is there only limited learning?
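Several of these questions can be approached directly from toolchain data. As a minimal sketch of the workload question, the snippet below counts in-progress items per assignee from an issue-tracker snapshot and flags anyone carrying well above the team average; the names and counts are invented for illustration:

```python
from collections import Counter

# Hypothetical snapshot of in-progress assignments pulled from an issue
# tracker; the people and workloads here are illustrative, not real data.
assignments = ["ana", "ana", "ana", "ana", "ben", "ben", "carol"]

wip = Counter(assignments)                      # items in progress per person
team_mean = sum(wip.values()) / len(wip)        # average load across the team

for person, count in wip.most_common():
    # A simple heuristic: 1.5x the team average suggests a possible
    # overload or single point of failure worth a closer look.
    flag = " <- possible overload" if count > 1.5 * team_mean else ""
    print(f"{person}: {count} items in progress{flag}")
```

In a real toolchain the `assignments` list would come from the issue tracker’s API rather than being hard-coded, and the 1.5x threshold is a judgment call each team would tune for itself.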
Just as data mining has led to a deeper understanding of available information, process mining leads to a better understanding of how DevOps is functioning to deliver value to customers: it uses complete, high-integrity data collection and analysis to understand and visualize the processes as they actually run.
Process mining can:
- Illuminate how the processes used shaped the resulting software product. Using historical process data and code characteristics, process mining reveals vulnerabilities and abnormalities that lead to more defects and weaker security. This analysis helps implement high-productivity, high-responsiveness life cycles like DevOps.
- Reconstruct the sequence of events for a process that was executed. The information obtained from the toolchain is used to construct a visual model of the process and determine key characteristics such as throughput, latency, and bottlenecks. The model also shows exception and defect rates that are traceable to particular process steps, enabling the team to identify and eliminate the root causes of issues.
- Increase the efficiency, integrity, and objectivity of information about the process. Process mining reveals a project’s processes by collecting and analyzing the electronic transactional footprint left behind by doing work. The traditional alternative is hiring a process consultant, which is usually time-consuming, expensive, and colored by the consultant’s interpretations. Just the facts, right? Better data, faster and at less cost, leads to better, faster results.
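As a rough illustration of the reconstruction idea, the sketch below builds a directly-follows graph from a toy event log; the case IDs, step names, and timestamps are invented. It counts how often each step directly follows another and the average latency between them; a rework loop shows up as a backward edge such as test -> commit:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log mined from a DevOps toolchain: each record is
# (case_id, activity, timestamp). All values here are made up for the demo.
events = [
    ("STORY-1", "commit", "2023-05-01 09:00"),
    ("STORY-1", "build",  "2023-05-01 09:05"),
    ("STORY-1", "test",   "2023-05-01 09:20"),
    ("STORY-1", "deploy", "2023-05-01 10:00"),
    ("STORY-2", "commit", "2023-05-01 11:00"),
    ("STORY-2", "build",  "2023-05-01 11:04"),
    ("STORY-2", "test",   "2023-05-01 11:30"),
    ("STORY-2", "commit", "2023-05-01 13:00"),  # rework: test sent it back
    ("STORY-2", "build",  "2023-05-01 13:06"),
    ("STORY-2", "test",   "2023-05-01 13:25"),
    ("STORY-2", "deploy", "2023-05-01 14:00"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# Group events into one trace per case, ordered by time.
traces = defaultdict(list)
for case, activity, ts in events:
    traces[case].append((parse(ts), activity))
for trace in traces.values():
    trace.sort()

# Build the directly-follows graph: per edge, how often it occurs and
# the total minutes spent traversing it (mean = total / frequency).
freq = defaultdict(int)
total_minutes = defaultdict(float)
for trace in traces.values():
    for (t1, a1), (t2, a2) in zip(trace, trace[1:]):
        freq[(a1, a2)] += 1
        total_minutes[(a1, a2)] += (t2 - t1).total_seconds() / 60

for edge in sorted(freq):
    mean = total_minutes[edge] / freq[edge]
    print(f"{edge[0]:>6} -> {edge[1]:<6} count={freq[edge]} mean={mean:.1f} min")
```

High-mean edges point at latency bottlenecks, and any edge that runs against the expected commit -> build -> test -> deploy flow marks a rework loop; real process mining tools render this same graph visually, which is what the flow map below is.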
As a teaser for part 2 of this series, consider the following process flow map. This map is automatically built from data taken from a real project. Can you spot the process bottlenecks? How about the rework loop? Read part 2 to find out more!