Process Cafe


Selfishness and Process.

Back in the mists of time I used to have a valid private pilot's licence (PPL). One of the things I wanted to do was get a share of an aircraft so I could increase my flying hours and learn more about this great pastime of flying. I inquired at the local airport and found a great little plane that was within my price range at the time.

But when I looked into the small print of the costs there appeared to be something which - at first - struck me as strange, but then raised a red flag.

As a bit of background: when using a plane there is always a cost involved. This cost varies depending on whether you own a piece of the plane or are borrowing it without any equity involvement. But either way, there is always the cost of fuel to factor in. In many cases you take the plane, use it for however long you want and then fill it up when finished, at your own cost. In some instances you can fill it up on a company account and it gets charged to your costs, or - and this was the case with the plane I was looking at - you were charged a fixed amount per hour for fuel, regardless of what you used.

The problem with this was the behaviours it promoted. Imagine the situation: regardless of how fast or how far you fly, you are going to be charged a set amount of money for fuel. This was not an insignificant amount either. It equated to 55 litres of fuel per hour for a single engine Cessna. (That's pretty much the same amount of fuel as my road car uses in about 8 days - and aviation fuel is more expensive). So in that situation, would you take it easy with the plane? Would you lean the fuel mixture to be as conservative as possible? Or would you just slam those throttles open and scream across the sky as fast as you could and as inefficiently as you could? Human nature would expect us to do the latter rather than the former. After all if there are ten people renting this plane in a week and they are all paying the same amount per hour, why should you be the one who only uses 75% as much fuel as them but still gets charged for 100%?

There was a knock-on effect of this, too. The amount per hour was calculated from the average fuel usage the owners of the plane were being charged. (They paid for all the fuel themselves and offset it against the hourly fuel rate.) But the rate they set was based on current behaviour rather than future expected behaviour, which meant that after a few months of running at the new fuel amount (55 litres per hour), they discovered they were actually paying for 70 litres per hour of usage (for example). The owners were therefore losing out on the deal. So what did they do? That's right, they increased the hourly fuel charge to 75 litres per hour regardless of actual usage.

Can you see how this would then become a bit of a problem? The upshot was that a lot of the guys who were renting the plane found that the fuel cost became prohibitive (even though they weren't actually using that much fuel per hour), and stopped flying. The owners then started to lose out on the rental charges for the plane. When it came to doing routine maintenance such as replacing the engine or the propeller, they found they had to stump up the money from their own pockets.

As I thought about this today I realised that there is a lesson in there from a process point of view. The process that was being run had measures or metrics that were - effectively - Key Performance Indicators for the whole process. They determined the efficacy of the process (after all, if the plane was using more than 55 litres of fuel per hour then the ROI on the process was reduced). But what had happened was that the process had become driven by the KPI rather than measured by the KPI. This was an occasion when the adage "What gets measured gets rewarded" did not apply. Quite the opposite, in fact.

The process had been designed with a flaw in it. The measure was inappropriate for the process. A more correct measure would have been to remove the fixed cost per hour and replace it with a variable cost based on actual usage. This added a small administrative burden to the billing process, but resulted in more flexible (and better run) flights, where the fuel usage wasn't excessive.
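To put some numbers on that, here is a minimal sketch comparing the flat hourly fuel charge with billing on actual litres burned. The fuel price and the consumption figures are assumptions for illustration only; the 55 litres per hour comes from the story above.

```python
# Hypothetical illustration of the two billing schemes discussed above.
# The fuel price and burn rates are invented; the 55 l/h figure is from the post.

FUEL_PRICE_PER_LITRE = 2.50   # assumed price of avgas, for illustration only
FLAT_LITRES_PER_HOUR = 55     # the fixed hourly fuel allowance in the post


def flat_rate_bill(hours_flown: float) -> float:
    """Charge a fixed fuel amount per hour, regardless of what was burned."""
    return hours_flown * FLAT_LITRES_PER_HOUR * FUEL_PRICE_PER_LITRE


def metered_bill(litres_burned: float) -> float:
    """Charge only for the fuel actually used."""
    return litres_burned * FUEL_PRICE_PER_LITRE


# A careful pilot leaning the mixture might burn 40 l/h; a "throttles open"
# pilot might burn 70 l/h. Under the flat rate both pay the same for an hour,
# so the careful pilot subsidises the wasteful one.
for litres_per_hour in (40, 70):
    print(litres_per_hour, "l/h ->",
          "flat:", flat_rate_bill(1.0),
          "metered:", metered_bill(litres_per_hour))
```

The metered version removes the incentive to fly wastefully, at the cost of a little billing administration - exactly the trade-off described above.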

It might be worth having a look around your current process and seeing if there is anything in there which is working in a counterintuitive way. Do you have any process steps that are time dependent and allow a larger amount of time than is needed? Could you cut down that time to enforce the right behaviours of efficiency and speed? Could you organise your workflow in such a way that you are not encouraging unwanted behaviours from the participants?

You might be surprised at what you can change.

Reminder: 'The Perfect Process Project Second Edition' is now available. Don't miss the chance to get this valuable insight into how to make business processes work for you. Click this link and follow the instructions to get this book.

All information is Copyright (C) G Comerford

Top posts for January 2012

The top posts on The Process Cafe for January 2012 were:

1) Silo Thinking and Why it is Bad - A perennial favourite
2) Ten BPM blogs you should be following. This is closely linked to the BPM Blacklist
3) What are your criteria for choosing a BPM tool?
4) Seven ways KPIs can make your process worse - a guest post from Bernie Smith at madetomeasurekpis.co.uk.
5) As is vs To Be. One of my personal favourite entries
6) The Two Axioms explained. In which I explain my assertion that there are two key axioms in BPM. 1) There are a large number of process issues that are common amongst most companies, regardless of market sector and line of business. 2) Most people in the company know what their process issues are but don't address them.
7) The Top Ten Tips for Business Process projects
8) Bowling and BPM: All Style and No Skill. In which I equate BPM users to ten pin bowlers.
9) More reasons to document your As Is.
10) Why SaaS pricing will kill BPM in the Cloud

What is interesting about these is that - with few exceptions - they are reasonably old posts. The most recent is number 4, a guest post from January, but the top entry dates right back to January 2010.

Thank you, everyone, for reading this blog. I know I don't update it as often as I used to, but the statistics indicate that the original entries I made are the most popular and are the ones that people keep coming back to look at.


Reminder: 'The Perfect Process Project Second Edition' is now available. Don't miss the chance to get this valuable insight into how to make business processes work for you. Click this link and follow the instructions to get this book.


All information is Copyright (C) G Comerford

7 Ways KPIs Can Make Your Process Worse

(Today's post is a guest post from Bernie Smith of madetomeasurekpis.co.uk. In it he lists seven ways in which the use of Key Performance Indicators can make your processes worse.)


Most people nod vigorously and agree that KPIs and measures are a "good thing". Used sensibly they are, but many organisations are actively undermining their process improvement through the poor use of measures. 

Here are some of the most common pitfalls and some ideas on how to tackle them.


1. KPIs that show you only part of the picture. 
Even if your measures and KPIs are set up properly, you may still be missing a big part of the picture - and getting a false sense of security - because they may not comprehensively cover the things you really care about. Some of the biggest process improvement opportunities come in the form of things the organisation may not even be aware are problematic. To find this type of opportunity you need a structured approach to mapping out the key drivers of the outcomes you are looking at (I use my Success Mapping approach; you could also use a modified version of TPM PM Analysis or a derivative of the BSC approach).

2. KPIs that are WRONG

KPIs and measures can be wrong in a number of ways (and often several ways all at once!). They can be incorrectly defined (I've seen countless organisations define OEE incorrectly), there can be variable definitions within the same organisation, there can be spreadsheet arithmetic errors, and there can also be wide variations in the understanding of what those measures are showing. 

The solution is simple, although it can be demanding: create a KPI database. This can be pretty basic (there's a checklist on my site: http://www.madetomeasurekpis.com/2011/post/tools_to_help/kpi-definition-checklist/), but it is key that there's one clear definition of each measure, with the known issues and inaccuracies also recorded and maintained.
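As a flavour of what a single entry in such a KPI database might hold, here is a minimal sketch. The field names are my own illustration, not Bernie's checklist; the point is one agreed definition per measure, with its known issues recorded alongside it.

```python
from dataclasses import dataclass, field


# A minimal sketch of one KPI definition record. Field names are illustrative.
@dataclass
class KPIDefinition:
    name: str                 # e.g. "OEE"
    formula: str              # the agreed calculation, written out in words
    owner: str                # who is accountable for the definition
    data_source: str          # where the underlying numbers come from
    update_frequency: str     # e.g. "daily", "monthly"
    known_issues: list = field(default_factory=list)  # recorded inaccuracies


kpi_db = {
    "OEE": KPIDefinition(
        name="OEE",
        formula="availability x performance x quality",
        owner="Operations manager",
        data_source="Line PLC logs, shift reports",
        update_frequency="daily",
        known_issues=["Planned maintenance currently excluded from availability"],
    )
}
```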

3. KPIs that drive the wrong employee behaviour

We have all been rushed off the phone by a call centre employee who is measured on AHT (average handling time) rather than something more meaningful (like total time to problem resolution). People generally behave rationally; it's often the measures that make them do odd things. One way to avoid this is to simply ask your team, "What do you find yourself doing to make the numbers that doesn't feel right to you?"
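A quick sketch of why the choice of measure matters, using entirely invented call data and agent names: ranked on AHT, the agent who bounces callers around looks best; ranked on time to resolution, the agent who actually fixes the problem does.

```python
# Invented call records: (agent, handling_minutes, minutes_until_resolved)
calls = [
    ("Agent A", 3, 3),    # short call, problem solved first time
    ("Agent B", 1, 45),   # caller rushed off the phone, bounced to another queue
    ("Agent A", 4, 4),
    ("Agent B", 2, 60),
]


def average(values):
    return sum(values) / len(values)


for agent in ("Agent A", "Agent B"):
    aht = average([h for a, h, _ in calls if a == agent])
    resolution = average([r for a, _, r in calls if a == agent])
    print(f"{agent}: AHT={aht:.1f} min, time to resolution={resolution:.1f} min")

# Measured on AHT alone, Agent B looks better; measured on time to
# resolution, Agent A clearly is.
```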

4. KPIs that are out of date

There's an old joke about being able to produce a perfectly accurate 7-day weather forecast - it just takes 14 days to create. Often the analysis produced in organisations is too old to enable effective decisions. This leaves one of two choices: stop producing the analysis or improve the KPI production process.
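One crude way to spot the problem, sketched here with invented report names and timings: compare how long each report takes to produce with how often the decision it feeds is actually taken.

```python
# Invented report metadata: (name, days to produce, decision cycle in days)
reports = [
    ("Weekly ops dashboard", 2, 7),
    ("Monthly risk pack", 20, 30),
    ("Daily despatch figures", 3, 1),   # arrives after the decision it was meant to inform
]

for name, days_to_produce, decision_cycle_days in reports:
    if days_to_produce >= decision_cycle_days:
        print(f"{name}: too slow to inform the decision - fix the production process or stop producing it")
    else:
        print(f"{name}: timely enough to act on")
```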

5. Sucking up valuable operational resource to produce them

I've worked with teams of 40 or 50, all dedicated to creating reports and dashboards. Some of the work will always require a human, but most won't. The first port of call should be looking to automate much of the tedious Excel legwork that seems to happen in most corporations. Next look at quick-to-implement BI tools like Qlickview and Tableau (there are a couple of first-look reviews http://www.madetomeasurekpis.com/2011/post/first-look-qlickview-10-data-visualisation-software/ and http://www.madetomeasurekpis.com/2011/post/review/first-look-tableau-6-1-data-visualisation-software/). In the long term a consolidated data warehouse can yield benefits, but has a high capital cost and can introduce as many problems as it solves.
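As a flavour of what automating that Excel legwork can look like (the file and column names below are hypothetical), a few lines of pandas can replace a weekly copy-and-paste exercise.

```python
import pandas as pd

# Hypothetical file and column names - adjust to your own workbook layout.
raw = pd.read_excel("call_log.xlsx")

# The sort of summary that is often rebuilt by hand every week.
summary = (
    raw.groupby("team")
       .agg(calls=("call_id", "count"),
            avg_handle_minutes=("handle_minutes", "mean"))
       .reset_index()
)

summary.to_excel("weekly_summary.xlsx", index=False)
```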

6. KPIs as a management club to beat staff up

We've all seen it: measures being used as a club by aggressive managers. There really isn't a KPI fix here; it's all about addressing those behavioural issues with the managers and making it clear that KPIs are not an instrument of torture. But be certain that if you don't address this issue it will shatter people's enthusiasm for measurement (and management).

7. Drowning process managers in detail - making sensible decisions impossible

Most humans can hold between 4 and 7 "chunks" of data in their mind at once, so how do they cope with 95-page "Risk and Compliance" reports? Put simply, they don't. In this situation they skim through, looking for exceptions and at their "pet" measures. Dashboard and report design is a big area, but this link http://www.madetomeasurekpis.com/2011/post/howto/how-to-build-a-brilliant-dashboard/ gives a few starting tips.

KPIs can be a great force for good, but if you fall into one, or more, of these traps you can miss out on much of the value they can deliver. For more practical advice on tackling some of these problems visit Bernie's site at www.madetomeasurekpis.com




About the author: Bernie has helped his clients deliver surprising levels of improvement across a wide range of industries over the past 15 years. His mission is to help clients with a repeatable, practical and jargon-free method for generating insightful and clear KPIs and management reports. He understands that most people don’t get excited by KPIs, but believes it’s a curable condition.



Dead Time... How do you treat yours?

(Image: "The Roadside Beauty Salon" by Stuck in Customs, via Flickr)

Craig over at The Process Ninja has an interesting little post discussing 'Dead Time' in processes. Basically it comes from a study which occurred many years ago where 'fast' workers could only work flat out for a length of time before dropping back and becoming unproductive, whereas 'productive' workers worked constantly at a lower efficiency rate. Overall the 'productive' workers beat the 'fast' workers. The difference between the 100% worker rate and the 'productive' rate is what Craig calls "Dead Time". He gives a couple of, admittedly simple, examples:
...is it worth installing new lifts in a building that are super fast to enable employees to get to their desks quicker? I'd say probably not as this period of time may fall into 'dead time'. Is it worth spending money on a super fast coffee machine in the kitchen? Probably not because people will still stand around and talk to whoever is in the kitchen at the time.
Now this got me thinking about the application of 'dead time' within process management itself. It obviously has applications within areas such as simulation: whenever you are working through a simulation there is always the temptation to try and load the figures one way or another to affect the results. In a given simulation step you will have metric values such as 'total work time' and 'total delay time' as well as 'transit time' and 'queuing time'. Nominally these times are gathered by monitoring existing processes and recording the time taken for the process steps there. The problem is that you can quite easily forget about the dead time in the process. Isn't it easy to watch two or three people working a process step and capture the fastest time as a 'best practice' timing? The obvious problem is that if that person is not the most productive person, you could be falling into the trap of ignoring the 'dead time'.
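Here is a minimal sketch of that trap, with invented timings: feeding the fastest observed worker's step time into a simulation quietly ignores the dead time that sustainable working includes.

```python
# Invented observations of the same process step, in minutes.
observed_step_times = {
    "fast worker (unsustainable burst)": 6.0,
    "productive worker (steady pace)": 8.5,
}
dead_time_per_step = 1.5  # assumed minutes of recovery/interruption per step

best_case = min(observed_step_times.values())
realistic = observed_step_times["productive worker (steady pace)"] + dead_time_per_step

steps_per_day = 50
print(f"Simulated daily workload (best-practice timing): {best_case * steps_per_day:.0f} min")
print(f"Simulated daily workload (with dead time):       {realistic * steps_per_day:.0f} min")
# Feeding the 'fastest observed' figure into the simulation understates the
# true cycle time by a wide margin.
```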

The other thing with 'dead time' is that it affects productivity for some people, but it can also start to reduce the productivity of unproductive people even more. Let's follow Craig's example of the super-fast coffee machine. If the '30%' worker comes into the kitchen and sees the '100%' worker chatting with someone over a super-fast cappuccino, isn't there a chance that he or she will also stop and chat for a while? Suddenly the 30% worker becomes a 50% worker. It's a slippery slope.

I honestly don't think we understand - or at least acknowledge - the concept of 'dead time' enough. Fundamentally it has the ability to change the way we view processes - or at least to alter our perception of productivity within a process. It can do this either in a positive or a negative fashion too.

I'm going to do some thinking about the effects of this and get back to you with more about this soon. In the meantime, do you have examples of 'dead-time'?

(You should follow me on Twitter by clicking here)



Reminder: 'The Perfect Process Project' is still available. Don't miss the chance to get this valuable insight into how to make business processes work for you.

Click this link now and follow the instructions to get the book.



For more about me check out my 'About Me' page

All information is Copyright (C) G Comerford

5 Take-aways for business process work...

Amber Naslund over at Altitude Branding has produced a post based on an interview she did with Scott Monty from Ford Motor Company. The subject of their discussion was social media and how Ford are approaching it. Amber came up with 5 take-aways from their discussion. I recommend reading her post alongside this one.

The reason I'm pushing her post is that I can see a large number of parallels between her take-aways and the world of business process management (lower case letters, rather than 'BPM' in upper case).

Let's go through them:




Strategy First.

As Amber says:

The tools don’t matter a fig. They’ll change, ebb, flow, and go away. But you have to approach social media from a holistic viewpoint: how is this going to touch and affect what I’m doing across the board, and what do we want to accomplish? (Don’t forget that goal-setting is part of strategy).

I believe the same can be said for business processes. Yes, you probably need some sort of tool to help you manage your process definition and evolution, and yes, Visio may well be what you end up using (although you know my thoughts on "Visio - the Devil's tool"), but at the end of the day it is the strategy for your process initiatives that is more important.
  • Why are you managing your processes?
  • What do you hope to achieve through doing this?
  • How are you approaching the whole area of governance and capability?
These are the questions that you need to be answering before you can even start to think about the tools.

Individual faces matter.
It is a sad truth today that in many organisations the big command from corporate - "Thou shalt follow this diktat" - is likely to alienate more people than it converts. It's worth remembering with business process management (and with pretty much any sort of human-facing change) that adoption of the change is a human process. Faces matter here. You need to put a face at the head of the effort: someone who is approachable and will listen to what people need to say. Not necessarily someone who will completely kow-tow to whatever is asked, but at least a face that people can talk to.

Business Process requires commitment.
A good business process programme will touch many areas of the business. As such it requires good management buy-in. The benefit of getting that buy-in is that you can then start to focus on commitment from other parts of the business. I've worked in companies where business process change was pushed through in a bottom-up approach rather than a top-down approach. Believe me, the difference is phenomenal: it is much easier to push things forward with the right commitment at the top.

Keep your feet on the ground.
Amber says :
It’s very easy to get swept up in the idea that everyone and every business ought to be using the latest and greatest shiny new tools. But those aren’t always the best, or the most practical, especially considering that most customers are operating in the mainstream and have never heard of some of our more fringe tools ..
This is even more apparent when you come to something like business process management, which tends to work on a 'hype cycle' basis (see this from Gartner regarding the hype cycle) - people get caught up in the fever of what could happen and then expect it to deliver more than it will. The ability to keep your feet on the ground and link your efforts to reality rather than a dream is paramount to making things like this work effectively.

Measure based on your goals.
I've written before about the issues with measuring processes. I've also written about Comerford's Three Laws of Metrics. So it's easy to understand why I have an affinity for this particular take-away.

It all comes down to the simple question of "Why are we doing this and can we prove that it is adding value?". If you can't measure whether you are being successful in what you are doing, you can't measure whether this is something that needs to be continued. Nobody wants to be in a situation where you are actually removing value from a value chain, or adding overhead unnecessarily.

Again, as Amber states:
The entire point of measuring is to learn. Analyze how you’ve done against your goals, but don’t stop there. Figure out what’s next. Where to keep fishing, where to cut bait. And don’t discount the anecdotal evidence of what you’re doing. It matters, too.

Sage words, and ones we would all do well to listen to....

(Photo courtesy of Plindberg. Released under a creative commons attribution licence)

Eyes bigger than your brain...?

James Taylor at Smart (enough) Systems has a great post about Ostriches!

Basically the ostrich has an eye bigger than its brain, and James draws an analogy between that and managers who want lots of data on a dashboard that they can look at (eyes) but not process (brains).

It all links back to Comerford's Third Law of Metrics, which states: "If you're going to measure something, at least have a way of feeding it back into the process to effect change." Too many managers and senior executives have fantastic-looking dashboards with lots and lots of data on them, only to realise that they're either not measuring the right things or, if they are, they're not doing anything with that data.

James puts it very succinctly when he says:
.. far too many organizations are obsessed with what they can see in their data and spend far too little time thinking about what to do based on their data.
Couldn't agree more.


For more about me check out my 'About Me' page


All information is Copyright (C) G Comerford


Golf and Process: Separated at birth?


With the start of a spell of decent weather here in the UK I went out and played a bit of golf yesterday (Don't ask about how well I played, it was just practice... Right!)

As I approached the 16th hole (a short par four with a huge oak in the middle of the fairway), it struck me how much playing golf and managing processes have in common.

Bear with me on this.

From the outside golf looks like an easy game. It looks as though little is going on: men and women wandering up and down fields and smacking a ball with a stick. But in fact a lot is happening that is not totally visible (understanding the lie of the land, calculating yardage and selecting a club, for example).

Inside the golf game there are parallels between the process project and the round of golf: Each works to a set of rules (although for the project these rules are not always followed correctly!) and each has metrics to decide how well it is doing (Strokes taken, Greens in regulation, Fairways hit, sand saves etc. vs Processes defined, process owners allocated and trained, etc.)

Let's look at the guys who do this for a living (golf, I mean): the Lee Westwoods, Justin Roses, Tiger Woodses and Phil Mickelsons of this world. If you look at their stats you will see that they are all pretty good, but there are huge discrepancies between individual stats.

Take Tiger, for example. Everyone sees him as a long hitter, but in actual fact he's not even in the PGA Tour's top ten: he's somewhere in the mid-50s, with an average distance of 287 yards. When it comes to accuracy (the number of times a drive lands in the fairway) Tiger is way, way down the list, landing only 58% of his drives on the fairway.

In fact, looking at his stats overall, the picture is not too promising. He's not the best driver, he's not the most accurate shooter, he's not the best putter, and he's atrocious at sand saves (getting out of a bunker and putting the next shot in the hole), but he is good where it really counts: scoring average and money earned. In both those stats he's number 1.

T. Woods - Standard Stats (value, rank):
  • Driving Distance: 287.7 yards (56th)
  • Driving Accuracy Percentage: 58.93% (138th)
  • Greens in Regulation Pct.: 73.26% (1st)
  • Putting Average: 1.735 (8th)
  • Eagles (Holes per): 96.0 (2nd)
  • Birdie Average: 4.31 (2nd)
  • Scoring Average: 67.73 (1st)
  • Sand Save Percentage: 47.62% (120th)
  • Total Driving: 194 (94th)
  • All-Around Ranking: 328 (5th)
  • Regular Season FedExCup Points: 17,745 (1st)
  • Money Leaders: $4.425m (1st)
  • Par Breakers: 25.00% (1st)
  • Putts Per Round: 28.63 (42nd)
  • GIR Pct. - Fairway Bunker: 80.0% (2nd)

Stats courtesy of PGATOUR.com

So what has this got to do with process, you may ask?

Well, anyone who has watched Tiger play will know that he has a set routine for every shot. He plays every shot with the same level of determination and preparation. In other words he has a process that he follows for every shot. He also has a process he follows for practice on the range. He has a process he follows when he is out playing practice rounds. He has a process he follows when on the practice putting green. Every part of his game has a particular set of rules, inputs and outputs which guarantee the best possible outcome.

The interesting thing here is that Tiger's processes are obviously not the best in every detail (for example, being 138th on tour in driving accuracy means his process for managing the direction of his shots is not optimised; for that he should look at someone like Olin Browne, who has the best driving accuracy on tour, hitting almost eight out of ten fairways). But overall, Tiger's processes get him where he needs to be: number 1 in the golf world.

Now let's look at another major player: Justin Rose

J. Rose - Standard Stats (value, rank):
  • Driving Distance: 280.8 yards (109th)
  • Driving Accuracy Percentage: 64.48% (70th)
  • Greens in Regulation Pct.: 63.49% (76th)
  • Putting Average: 1.858 (184th)
  • Eagles (Holes per): 378.0 (111th)
  • Birdie Average: 2.62 (184th)
  • Scoring Average: 70.85 (74th)
  • Sand Save Percentage: 64.52% (7th)
  • Total Driving: 179 (73rd)
  • All-Around Ranking: 815 (108th)
  • Regular Season FedExCup Points: 1,416 (109th)
  • Money Leaders: $331k (111th)
  • Par Breakers: 14.81% (183rd)
  • Putts Per Round: 29.76 (155th)
  • GIR Pct. - Fairway Bunker: 61.5% (16th)

Stats courtesy of PGATOUR.com

This tells us that his driving accuracy is way better than Tiger's, his sand saves are way better than Tiger's, his total driving is way better than Tiger's and yet, when it comes to the places that count - Scoring average and money earned - he is way behind Tiger. Again I think this is down to process. Justin obviously has processes in place the same as Tiger but it is possible that they are not working the same for him as they are for the world number one.

Let me see if I can show you an analogy.

If I were running a call centre measured on throughput of customer calls, I could easily design a process that capped every call at a maximum of 2 minutes. Operators would be trained either to solve the issue immediately or to drop the caller back into a queue to be dealt with by someone better suited to answer the question. When the stats appear at the end of the month, the objective of every answered call lasting 2 minutes or less has been met. However, the customer is not happy, because they have been shuttled back and forth between different people who couldn't answer their question.

The problem is that folks are measuring the wrong things in the process. Tiger and his team are obviously not too concerned about the fact that he is wildly inaccurate off the tee, because they know that his ability to land the second shot on the green is the best on tour (literally). This means they have identified the right things to measure and are measuring them appropriately. (Remember Comerford's Third Law of Metrics: "If you are going to measure something, at least find a way of feeding it back into the process to effect change".) Justin Rose's coaches are looking at his stats and trying to improve all of them, thereby ensuring that none of them gets any better. Justin drives it just 7 yards shorter than Tiger and lands it in the fairway 5 percentage points more often, but misses the green on the next shot 10% more often than Tiger. This makes him 76th in that stat rather than first, and that is what's causing the issue. Once they are both on the green, Tiger only holes one more putt in ten than Justin does, but because he's on the green more often than Justin his scores are lower: almost three shots per round lower. Couple that with four rounds per tournament and there is a 12-shot difference between the two players. That's enough to put Tiger at the top and Justin down in 108th place.

So the question I pose to you is: 'Are you trying to optimise all your processes at the expense of knowing where the main benefit needs to be?' Couldn't we all benefit from knowing that, even though some of our processes are suboptimal, we have identified the key ones and made sure those are working well?

(C) Process Cafe 2008
(Photo courtesy of Guiri R. Reyes)

Process Maturity: Is Gartner wrong?


Nick Malik on the Inside Architecture blog posts a musing on the nature of Process Maturity. His contention is that the Gartner Maturity model is predicated on the fact that you need to measure where you are in order to get from one level to the next. "This makes sense", you think.

But Nick is saying: "What's the business driver to measure process capability when the reason we're doing this is to improve our business?" Surely a company should focus on improving how they do things rather than on measuring how good they are at managing processes? Once they start to get improvement in their business processes they should then look at how well they are managing those processes. This will lead them on to the next maturity level, and so on.

Good contention, Nick. I'm with you for a large part of the way. I think where the argument falls down is in the details. Sure, I can give Visio ("The Devils Tool") to a bunch of users and get them looking at how they do things with a view to making them better, but in the long term is this the best way to build a process management capability?

In my mind the Maturity Model is linked to the level of sophistication a business has in its process modeling capability. If I give the wrong tools to the wrong users, who then use the wrong methodologies, my processes aren't going to get much better. This is where a maturity model comes in.

Having said that, I am 100% behind Nick when he says, "The only thing more dangerous than measuring nothing: measuring the wrong thing." I wholeheartedly agree, and refer you to Comerford's Three Laws of Metrics as examples.

Back in the days when I ran a European business process shop for a US multinational, the company seemed to spend an inordinate amount of time trying to benchmark itself against other folks. This used a great deal of the Maturity Model concept. But, as Nick says, it missed the fundamental point of 'Are we actually doing any process modeling that is adding value to the business?' At that point the answer was 'probably not'. I'm not sure where that organisation is now, but I suspect they are still more concerned with knowing where they fit against competitors than with how well their processes actually work.

An interesting read. Well worth a few moments of your time.

(Photo courtesy Bettyspics)


Forrester Reports on BPM



Our good friends at Forrester have released a free report called 'The EA View: BPM Has Become Mainstream' by Ken Vollmer. It is a wide-ranging but quite detailed report summarising research and surveys carried out at the back end of last year.

It's worth a read if you have the time.

However, it does fall prey to a couple of things that are personal bugbears for me when it comes to processes and BPM:

1) It surveys what people are currently doing rather than what the right way to do things is. In an environment where many different people are doing things in different ways, and in a maturing market, there will always be different approaches, some right, some wrong. A report like this does not include any recommendations on the most effective way of doing things; it simply records what people are currently doing. How am I supposed to know whether I'm doing things the best way in that case? For example, it mentions that the two most widely used BPM vendors are IBM and Microsoft. Now, the last time I looked, both of these companies were in the operating system/software business and one of them was in the hardware business. Are they really the best companies to use for BPM work? Maybe they are (and this is said with no prejudice at all towards these two companies), but just because they are the biggest doesn't necessarily mean they are the right ones to use. (It was also mentioned that, on average, each respondent to the survey was using 3.5 BPM vendors - another interesting statistic.)

2) It talks about metrics, giving details of which metrics are used to measure BPM success. From my post on Comerford's Three Laws of Metrics you will no doubt remember that I said (amongst other things) "Metrics for the sake of metrics are a waste of time". Looking at the Forrester list I see that the top four items were process cycle times, customer satisfaction, risk reduction and process error rates. Of these four I personally feel that process error rates and cycle times are metrics for the sake of metrics. Why would you want to measure the fact that your process is producing a number of errors? Does it matter? (Of course it matters, but the more useful question is what percentage of the process's outputs contain errors.) Cycle time is another misleading entry. Why would you measure the fact that your process cycle is now 45 minutes? So what? A more effective measure would be cycle time reduction: 'My cycle is now 70% quicker than previously'. That adds value, and that's the sort of thing that would look good on a management dashboard. Interestingly enough, 5% of major companies with over $1bn in revenue were not measuring their BPM success at all.
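To make that concrete, here is a small sketch with invented figures showing the difference between reporting a raw cycle time and reporting the reduction against a baseline - the version worth putting on a dashboard.

```python
def cycle_time_reduction(baseline_minutes: float, current_minutes: float) -> float:
    """Percentage improvement against a baseline - the number worth reporting."""
    return (baseline_minutes - current_minutes) / baseline_minutes * 100


# Invented figures: a raw "45 minutes" says very little on its own.
baseline, current = 150.0, 45.0
print(f"Current cycle time: {current:.0f} minutes")   # a metric for the sake of metrics
print(f"Cycle time reduced by {cycle_time_reduction(baseline, current):.0f}% "
      f"against the old process")                     # the version that adds value
```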

As I mentioned earlier, this is worth a few minutes of your time to read and digest.

But as with all 'survey based findings' take the results with a pinch of salt.

Comerford's Three Laws of Metrics


I work a lot with organisations that look at process and follow the standard company line:

"Once we have our process in place we need to ensure we measure it. Let's put some metrics around it"

"Excellent", I think "They've got their act together and know what they mean."

However, when it comes to the implementation of the metrics they don't seem to focus on the right things. I've had companies looking at submission processes (i.e. a process whereby something is submitted for review and approval) where they want to capture a metric for 'number of things submitted'. I've had companies looking at processes to manage performance where they weren't looking at the actual performance; they were looking at the ability of someone to meet an arbitrary performance deadline so they could say "I met this compliance metric".

Through all of this I have to ask myself why the company feels it wishes to collect metrics. To distil it all down I use "Comerford's Three Laws of Metrics" to help focus thinking:

1) Metrics for the sake of metrics are a waste of time: Essentially, if you are gathering data about a process because someone said it's a good idea to gather this data, then you are wasting your time. I don't care if you massage it, re-format it and stick it on an executive's dashboard; if you're just doing it to show figures you might as well make the data up. It's more important to gather some meaningful data about the process. So you managed to deal with 35 documents this month in your approval process. So what? What does this tell you about the process? It tells you that 35 documents went through it. How many of these were approved? How many rejected? What was the capacity of the process (in other words, is 35 documents a lot for this process or a little)? These are the kinds of things you need to track.

2) A metric which says 'I said I was going to do it and I did it' is also a waste of time: I worked with an organisation that had a performance management process which measured you on your ability to produce a certain document by a certain date. If you had your objectives completed and signed off by Feb 21st you got a little star, an 'Attaboy' and - more importantly - something that contributed towards your pay review at the end of the year. However at no point in this process was there a measure of the quality of the objectives, or even the effectiveness of the objectives. All they were concerned about from a process point of view was 'Did we complete what we said we were going to do when we said we were going to do it?'

3) If you are going to gather metrics, at least have a way of feeding them back into the process to effect change: This is, essentially, the key part of the three laws. If you are going to the trouble of actually gathering data, tracking it and reporting it, where is the part of your metrics-gathering process that feeds that data back into the process and permits a change? Going back to our submission process: we have 35 documents going through, each document takes 2 days to process, 20% are rejected, 70% are approved and 10% are re-worked and resubmitted prior to a decision. This starts to become meaningful data, but if we also tracked data such as "percentage of resource time spent working on processing" and "percentage of processing time spent awaiting a decision" we would have some key data to help change the process. If we found that it takes only 8% of a resource's time to process a document, it means either we have more capacity than we need for dealing with these documents or we can increase the throughput of documents. However, if 95% of the processing time of a document is spent waiting for approval, then we need to feed this back into the process to understand why we have a bottleneck: too few approval resources? Inappropriate allocation of time for approval? Technical issues in the approval process? All of these can feed back into the process and effect change.
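A sketch of that feedback loop, using the same invented figures as the example above: once you record where the processing time actually goes, the bottleneck falls out of the data.

```python
# The invented figures from the submission-process example above.
documents = 35
approved, rejected, reworked = 0.70, 0.20, 0.10
resource_share_of_elapsed_time = 0.08   # only 8% of elapsed time is hands-on work
awaiting_decision_share = 0.95          # 95% of processing time spent awaiting approval

print(f"Of {documents} documents: {approved * documents:.0f} approved, "
      f"{rejected * documents:.0f} rejected, {reworked * documents:.0f} reworked")

# The feedback step: turn the measurements into a change to the process.
if awaiting_decision_share > 0.5:
    print("Bottleneck is the approval step - add approvers, shorten the approval "
          "window, or fix whatever technical issue is delaying decisions.")
if resource_share_of_elapsed_time < 0.2:
    print("Spare processing capacity - increase throughput or reallocate the resource.")
```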

How many of these seem familiar to you?
