I work a lot with organisations that look at process, and they all follow the standard company line:
"Once we have our process in place we need to ensure we measure it. Let's put some metrics around it"
"Excellent," I think. "They've got their act together and know what they mean."
However, when it comes to implementing the metrics, they don't seem to focus on the right things. I've had companies looking at submission processes (i.e. a process whereby something is submitted for review and approval) where they want to capture a metric for 'Number of things submitted'. I've had companies looking at performance management processes where they weren't looking at the actual performance; they were looking at someone's ability to meet an arbitrary deadline so they could say, "I met this compliance metric."
Through all of this I have to ask myself why the company wants to collect metrics in the first place. To distill it all down, I use "Comerford's Three Laws of Metrics" to help focus thinking.
1) Metrics for the sake of metrics are a waste of time: Essentially, if you are gathering data about a process because someone said it's a good idea to gather this data, then you are wasting your time. I don't care if you massage it, re-format it and stick it on an executive's dashboard; if you're just doing it to show figures, you might as well make the data up. It's more important to gather some meaningful data about the process. So you managed to deal with 35 documents this month in your approval process. So what? What does this tell you about the process? It tells you that 35 documents went through it. How many of these were approved? How many rejected? What was the capacity of the process (in other words, is 35 documents a lot for this process or a little)? These are the kinds of things you need to track.
2) A metric which says 'I said I was going to do it and I did it' is also a waste of time: I worked with an organisation that had a performance management process which measured you on your ability to produce a certain document by a certain date. If you had your objectives completed and signed off by Feb 21st, you got a little star, an 'attaboy' and, more importantly, something that contributed towards your pay review at the end of the year. However, at no point in this process was there a measure of the quality of the objectives, or even their effectiveness. All anyone was concerned about, from a process point of view, was 'Did we complete what we said we were going to do when we said we were going to do it?'
3) If you are going to gather metrics, at least have a way of feeding them back into the process to effect change: This is, essentially, the key part of the three laws. If you are going to the trouble of actually gathering data, tracking it and reporting it, where is the part of your metrics-gathering process that feeds that data back into the process and permits a change? Going back to our submission process: we have 35 documents going through, each document takes 2 days to process, 20% are rejected, 70% are approved and 10% are re-worked and resubmitted prior to a decision. This starts to become meaningful data, but if we also tracked figures such as 'percentage of resource time spent working on processing' and 'percentage of processing time spent awaiting a decision', we would have some key data to help change the process. If we found that processing a document takes only 8% of a resource's time, it means that either we have more capacity than we need for dealing with these documents, or, alternatively, we can increase the throughput of documents. However, if 95% of a document's processing time is spent waiting for approval, then we need to feed this back to the process to understand why we have a bottleneck: Too few approval resources? Inappropriate allocation of time for approval? Technical issues in the approval process? All of these can feed back into the process and effect change.
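To make the third law concrete, here is a minimal sketch of how those figures could be computed from raw submission records. All the field names, the `Submission` record shape and the sample numbers are hypothetical, not taken from any particular tool; the point is simply that capturing outcome and time-split per submission is enough to derive the actionable percentages discussed above.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    outcome: str            # "approved", "rejected" or "reworked"
    processing_days: float  # total elapsed time in the process
    working_days: float     # time someone actively worked on it
    waiting_days: float     # time spent awaiting a decision

def process_metrics(submissions):
    """Turn raw submission records into the percentages worth feeding back."""
    total = len(submissions)
    counts = {"approved": 0, "rejected": 0, "reworked": 0}
    for s in submissions:
        counts[s.outcome] += 1
    total_processing = sum(s.processing_days for s in submissions)
    total_working = sum(s.working_days for s in submissions)
    total_waiting = sum(s.waiting_days for s in submissions)
    return {
        "throughput": total,
        "approved_pct": 100 * counts["approved"] / total,
        "rejected_pct": 100 * counts["rejected"] / total,
        "reworked_pct": 100 * counts["reworked"] / total,
        # the two figures that actually drive change:
        "pct_time_working": 100 * total_working / total_processing,
        "pct_time_waiting": 100 * total_waiting / total_processing,
    }
```

With records matching the example above (2 days per document, of which only a small fraction is active work), `pct_time_waiting` immediately surfaces the approval bottleneck, which is exactly the feedback the process owner needs.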
How many of these seem familiar to you?