In technology, we are bombarded with stats, percentages, KPIs and scorecards. Things like customer satisfaction rates, cost per transaction, ROI, response time, time to close, and system uptime are all part of our vernacular. Yet we tend to have blind faith in the accuracy of these numbers. Often a department’s success is determined by a few figures that are trusted without proper scrutiny. We set goals and pay bonuses on these metrics. Have you ever stopped to review these numbers to determine whether they are actually telling you the truth?

This deluge of data and statistics is used to validate statements, sell products and sway executive opinion. Relying on it without any validation is a dangerous strategy. Placing information in such high regard without a proper review exposes an organization to the risk of encouraging improper behaviours or making mistakes it never detects.

A key example of this lies in the analysis of satisfaction surveys, which organizations commonly use to measure their IT operations. Suppose you conduct a satisfaction survey and achieve a score of 90%. It appears the organization is pleased with its IT service. You may be happy with this result: you hit your KPI target and everyone gets their bonus. But is it a true measurement of customer satisfaction? That depends on the question and how it is asked. If the question was, “Are you very unhappy with the IT department?”, then the results may be suspect, as they measure displeasure rather than satisfaction.

Furthermore, if the survey is sent only after an incident is closed, the results will exclude people with unresolved issues, producing a biased result.
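To illustrate how much that timing bias can matter, here is a minimal sketch. The ticket counts and field names are invented for the example, not drawn from any real survey.

```python
# Hypothetical illustration: surveying only closed tickets overstates satisfaction.
# All numbers below are invented for the sake of the example.

tickets = (
    [{"status": "closed", "satisfied": True}] * 85    # resolved, happy customers
    + [{"status": "closed", "satisfied": False}] * 5  # resolved, unhappy customers
    + [{"status": "open", "satisfied": False}] * 30   # unresolved, unhappy customers
)

# Only closed tickets receive the survey.
surveyed = [t for t in tickets if t["status"] == "closed"]

reported = sum(t["satisfied"] for t in surveyed) / len(surveyed)
actual = sum(t["satisfied"] for t in tickets) / len(tickets)

print(f"Reported satisfaction (closed tickets only): {reported:.0%}")  # ~94%
print(f"Actual satisfaction (all tickets):           {actual:.0%}")    # ~71%
```

The survey is not wrong about the people it reaches; it simply never reaches the people most likely to be unhappy.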

In a New York Times Magazine article titled “Metric Mania,” John Allen Paulos discussed this problem of misleading results and the importance of how data is counted and aggregated. The example he uses is the five-year survival rate for diseases of the elderly. He explains:

“Suppose that whenever people contract the disease, they always get it in their mid-60s and live to the age of 75. In the first region, an early screening program detects such people in their 60s. Because these people live to age 75, the five-year survival rate is 100 percent.”

“People in the second region are not screened and thus do not receive their diagnoses until symptoms develop in their early 70s, but they too die at 75, so their five-year survival rate is 0 percent.”

He concludes, “The laissez-faire approach thus yields the same results as the universal screening program, yet if five-year survival were the criterion for effectiveness, universal screening would be deemed the best practice.” The metric therefore creates a false impression of the process.
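The arithmetic behind this is worth making explicit. Below is a minimal sketch of Paulos’s scenario; the ages come from the quoted example, while the cohort size and function names are invented for illustration.

```python
# Lead-time effect: earlier diagnosis changes the metric, not the outcome.

def five_year_survival(age_at_diagnosis: int, age_at_death: int) -> bool:
    """True if the patient is still alive five years after diagnosis."""
    return age_at_death - age_at_diagnosis >= 5

# Region 1: screening detects the disease in the mid-60s; everyone dies at 75.
screened = [five_year_survival(65, 75) for _ in range(100)]

# Region 2: no screening, diagnosis at symptom onset in the early 70s; everyone still dies at 75.
unscreened = [five_year_survival(72, 75) for _ in range(100)]

print(f"Screened region:   {sum(screened) / len(screened):.0%}")      # 100%
print(f"Unscreened region: {sum(unscreened) / len(unscreened):.0%}")  # 0%
# The underlying outcome (death at 75) is identical; only the diagnosis date moved.
```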

To avoid drawing similarly inaccurate conclusions from your own data, here are four questions that should be answered for all data being gathered and applied to KPIs or metrics:

1. Why is the data being collected?

Review the underlying purpose of the measurement or metric. What are you trying to achieve with the knowledge you are seeking? You need to know the question you need answered before you ask it. This may seem simple, but it is where many organizations fall short in their data collection. Be very specific with the wording of the question so that you obtain the relevant knowledge from those who answer it. The more precise the question, the more reliable the answer, and the better the analysis of the data collected. Your goal is to ensure you are paying bonuses to individuals who meet the targets your organization actually needs to succeed.

2. How is the data being used and measured?

Whether you are improving a process or monitoring activities on an ongoing basis, understanding how the data you are collecting will be used is critical to how it is sliced and diced. Benchmarks are also critical for any analysis, especially if you are looking to change a process: baseline numbers provide a point-in-time reference for comparison. So ensure your baseline is in place before collecting or analysing new data (a short sketch of this baseline comparison follows these four questions).

3. Who is gathering the data?

A standard approach to data collection is always paramount, and it matters even more as the number of people collecting the data grows: the more people involved, the greater the potential for error. Proper training should be provided to ensure data quality.

4. Where & when is the data collected?

Again, planning is critical to answering the question you set out to answer. Is the data collected in real time, weekly, or monthly? Everything has a cost, and data collection is no different. As part of the decision, your organization will need to consider the costs associated with the location and frequency of collection.
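As promised under question 2, here is a minimal sketch of the baseline comparison idea: capture a point-in-time reference before a process change, then compare new measurements against it. The metric name and numbers are hypothetical.

```python
# Compare a current measurement against a baseline captured before a process change.
# "Time to close" is used only as an example metric; the figures are invented.

baseline_close_times = [4.0, 5.5, 6.0, 4.5, 5.0]   # hours to close, before the change
current_close_times = [3.5, 4.0, 4.5, 3.0, 4.0]    # hours to close, after the change

def mean(values):
    return sum(values) / len(values)

baseline_avg = mean(baseline_close_times)
current_avg = mean(current_close_times)
change = (current_avg - baseline_avg) / baseline_avg

print(f"Baseline average: {baseline_avg:.1f} h")
print(f"Current average:  {current_avg:.1f} h")
print(f"Change vs. baseline: {change:+.0%}")   # a negative value means faster closes
```

Without the baseline figures, the “current average” on its own tells you nothing about whether the process change helped.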

In summary, if you are using data for benchmarking, metrics and scorecards to make management decisions and/or bonus payments, the data collected should be reviewed against the questions above. At the very least, someone should take the time to review the data and the analysis. For some organizations, simply having these metrics would be a big step. But if your company uses KPIs or metrics as decision-making tools, you will need to question the data like a reporter to determine how it was gathered.