Even the Tiniest Error Can Cost a Company Millions
Think of your data as a mound of rocks. All managers know they need to be able to sort the rocks and count the rocks, but the best ones can also turn over each rock to see what crawls out. By doing so, you can make some startling discoveries.
An example of this go-deeper approach comes from AT&T, where Bob Pautke, manager of Access Financial Assurance, and his team were charged with ensuring that AT&T paid the right amount for certain services it purchased from other telephone companies. Their job was easy to define but tough to perform. The services were complex, the sheer number of invoices was high, and there were many errors. AT&T feared that it was overpaying, possibly by tens of millions of dollars.
The obvious approach was to search for errors by inspecting each invoice against internal data sources that estimated what each invoice should have been. Unfortunately, this approach was not up to the challenge. While some errors were easy to spot, the internal sources often proved unreliable, and many suspected errors slipped through. Further, proving an invoice incorrect was expensive, and resolution took too long.
A new way of approaching the problem was needed, so Bob and his team expanded their scope from inspecting the invoices themselves to evaluating the entire process that created them. You’ll get correct invoices when the end-to-end process works perfectly, the first time and every time. Conversely, an error on an invoice had to stem from an error in the process.
Since nobody knew precisely what happened as the work proceeded, Bob and his team conducted a tracking study by simply observing what happened to the data created and processed at each step. To start, Bob’s team compiled 20 tracked records and studied them, looking at individual records and calling out anomalies. In each case, they identified something that “just didn’t look right.” The figure below features a portion of one of the tracked records and highlights, in blue, four instances where something changed that they did not expect to change as this data wended its way through the process. (Note: Data has been disguised to protect AT&T’s proprietary information.)
The first two changes (from XYZ.1234 to XYZ-1234 and from 1 to A) involved reformatting the data during Step B. They discovered a number of small changes like this as they looked through the data. Some were annoying, but none appeared to impact invoices. The other two changes, though, were more substantial. The billing number and office number changed mid-process. These changed the meaning in the data and impacted the invoice.
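The tracking logic the team applied by hand can be sketched in code: capture each field's value at every process step, then flag changes, while tolerating pure reformatting (like XYZ.1234 becoming XYZ-1234) on fields where it is expected. This is a minimal illustration, not AT&T's actual system; the field names, step labels, and sample values are invented.

```python
# A tracked record: the same record's field values captured at each step.
# All names and values here are hypothetical, for illustration only.
tracked = {
    "circuit_id":     {"A": "XYZ.1234", "B": "XYZ-1234", "C": "XYZ-1234"},
    "billing_number": {"A": "555-0100", "B": "555-0100", "C": "555-0199"},
}

# Fields where reformatting between steps is expected and harmless.
reformat_ok = {"circuit_id"}

def normalize(value):
    """Ignore cosmetic differences such as punctuation."""
    return "".join(ch for ch in value if ch.isalnum())

def flag_changes(record, reformat_ok):
    """Return (field, step, before, after) for each substantive change."""
    flags = []
    for field, by_step in record.items():
        steps = sorted(by_step)
        for prev, curr in zip(steps, steps[1:]):
            before, after = by_step[prev], by_step[curr]
            if before == after:
                continue
            # Skip a pure reformat (same characters, different punctuation)
            # on fields where we tolerate it.
            if normalize(before) == normalize(after) and field in reformat_ok:
                continue
            flags.append((field, curr, before, after))
    return flags

for field, step, before, after in flag_changes(tracked, reformat_ok):
    print(f"{field}: changed at step {step}: {before!r} -> {after!r}")
```

Here the circuit-number reformat is ignored, but the billing-number change, the kind that alters meaning and impacts the invoice, is flagged.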
While this tracking method had not yet cracked the original business problem of ensuring that AT&T paid the right amount for services, it had produced plenty of potentially interesting "rocks" to turn over. In so doing, it changed the dialogue. The questions were no longer simply, Is this invoice correct? and If not, how much is it off by? Instead, they had become, How bad is the process? Where is it broken? and How do we fix it? Sometimes when you turn over a rock, what emerges is not an answer to an existing question, but a better question.
Thus, Bob and his team sought to develop deeper insights into the frequency and severity of errors. They automated data collection and began to look for overarching patterns.
They started by using visuals such as time-series and Pareto plots to gain insights into these questions. The figure below helped answer the first question: How bad is the process? It showed that, on average, only 40% of the data records made it all the way through the process without error. Clearly, the underlying process problems were enormous and pervasive.
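The "how bad is it?" measurement itself is simple: the share of records that clear every step error-free. A minimal sketch, with invented step labels and error flags:

```python
# Each tracked record carries the list of steps where an error was
# detected; an empty list means it made it through clean.
# These five records are hypothetical sample data.
records = [
    {"id": 1, "errors": []},
    {"id": 2, "errors": ["C"]},       # failed at step C
    {"id": 3, "errors": []},
    {"id": 4, "errors": ["D", "E"]},  # failed at steps D and E
    {"id": 5, "errors": ["C"]},
]

clean = sum(1 for r in records if not r["errors"])
pass_rate = clean / len(records)
print(f"{pass_rate:.0%} of records made it through error-free")
# prints: 40% of records made it through error-free
```

Plotted per week as a time series, this single number makes it obvious whether the process is improving or degrading.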
As the extent of the problem became clearer, they turned their attention to the second issue: where errors occurred. In many cases, as you see in the figure labeled “Process Performance By Administrative Region,” the visuals yielded no particular insight. But the figure labeled “Process Performance By Attribute” proved more fruitful — it revealed that the vast majority of problems occurred in a relatively few attributes.
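A Pareto analysis like the one the team ran by attribute amounts to counting errors per attribute, sorting from most to least frequent, and tracking the cumulative share. The attribute names and counts below are hypothetical:

```python
from collections import Counter

# Hypothetical error log: one entry per error, naming the attribute
# on which the error occurred.
error_log = (
    ["billing_number"] * 45 + ["office_number"] * 30 +
    ["circuit_id"] * 10 + ["service_code"] * 8 + ["region"] * 7
)

counts = Counter(error_log).most_common()  # sorted high to low
total = sum(n for _, n in counts)

# Print each attribute's count and the running cumulative share.
cumulative = 0
for attribute, n in counts:
    cumulative += n
    print(f"{attribute:15s} {n:4d}  {cumulative / total:6.1%} cumulative")
```

In this sample, the top two attributes account for three-quarters of all errors, exactly the "vital few" pattern that tells you where to target improvement.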
In a separate analysis, Bob and his team discovered that the vast majority of problems occurred at the interfaces between steps C and D, and between D and E. Then, combining this insight with what they already knew about attributes, they were able to pinpoint exactly where to target improvements, in effect providing a very precise answer to where the process was broken.
Improvement teams were then tasked with addressing how to fix the issues. As these teams completed their work, end-to-end process performance and invoice quality improved. Bob's original task, determining whether the company was paying exactly what was owed, was now much easier. And, not surprisingly, the company saved tens of millions along the way.
You can take these steps in your own organization, as you dive into your data.
- First, identify the business problem and ask yourself what hidden assumptions constrain your efforts to address it. In Bob’s case, the team was looking to discover whether AT&T was paying exactly what was owed but was unsure whether verifying invoices was the best way to do so.
- Then, find or create relevant data and test those assumptions. Bob’s team was able to track records, testing whether process errors led to invoice errors.
- Next, dig into the data and let new questions emerge. In their search, Bob’s team discovered three more important questions, which revealed larger process issues.
- Last, find solutions. Once the data has brought issues to the surface, take steps to fix these problems, improving your business in the process.
There is nothing magical about any of this. While Bob and his team were smart, articulate, and hardworking, they had only the most rudimentary quantitative skills when they started. But they learned a few basic ways to turn over rocks and challenge conventional wisdom, and new and unexpected information crawled out.