Tuesday, May 19, 2009
Definition of a World Class Call Center? Not as bad.
While WOW is good, I felt the discussion did not focus enough on accuracy. In a kind of Maslow's Hierarchy of Needs, I believe a center would be better served getting the basics right (the right steps to ID the customer, the right diagnostic steps, the right disclosures, the right offer, the right price, the right system updates, etc.) before worrying about agent warmth, empathy, or any kind of WOW factor.
But go into centers and ask to see a chart of error rates on some of the dimensions listed above. They don't exist. ( http://ifyouwanttoscream.blogspot.com/2009/05/benchmark-error-rates-for-contact.html ). A call center leader for a division of a high tech company was lamenting that they could not get their thousands of outsourced agents to 1) consistently diagnose a high-volume tech support call correctly, 2) consistently check the warranty to avoid unauthorized returns, and 3) consistently remind the customer to remove software before returning the unit (since the customer would never see it again), which resulted in angry letters about missing software to the Office of the CEO. Did they or their outsourcer track any of these sub-process error rates? They did not. (See Call Center Hidden Factories http://www.nationalcallcenters.org/pubs/In_Queue/vol3no7.html#Call_Center_Hidden_Factories )
Not only are error rates not tracked, but the very process of making process changes...which happen constantly in call centers...is a complete joke (see Inside Jokes http://www.nationalcallcenters.org/pubs/In_Queue/vol3no15.html ). As an example, a client of ours went to the Philippines to listen to their outsourcer's agents take calls. In 10 out of 10 calls they listened to, the agents gave the wrong price for a service that had recently changed. Our client asked the management how they communicated the change, and they said what any call center management team would say: we had team meetings, we sent out emails, we did chair-drops, we monitored some phone calls and did some coaching. We have to stop kidding ourselves about our ability to change agent behavior in any kind of timely fashion (see Wag the Dog: Why Are We Letting Agent Traits Control Call Center Outputs? http://ifyouwanttoscream.blogspot.com/2009/05/wag-dog-why-are-we-letting-agent-traits.html )
I am not sure what the call center definition of World Class is, but I do know this: whether outsourced or in-house, call centers make too many errors every day (see Do Call Centers Need to Carry Malpractice Insurance http://www.nationalcallcenters.org/pubs/In_Queue/vol2no24.html ). This is not just a few aberrant centers. The high error rates we observe are endemic to the call center service delivery model that most centers follow. Unaided humans are not very reliable to begin with (3 Sigma at best), let alone the mostly young, entry-level, low-paid employees we staff our centers with, who (in normal economic times) turn over at a stratospheric rate because the jobs, in general, are stressful, and who, during their short stays, get too little training, only occasional monitoring, and even less coaching. (As an aside, coaching is the go-to method for improving call centers, but it is of questionable ROI given the high turnover (see The Futility of Call Center Coaching http://www.isixsigma.com/library/content/c080331a.asp ).) With this as the typical call center M.O., is it any wonder we observe the quality problems that we do?
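To put that "3 Sigma at best" claim in concrete terms, here is a short Python sketch (standard library only; the function name and the conventional 1.5-sigma long-term shift are my assumptions, not something from the post) that converts a sigma level into a defect rate:

```python
import math

def defects_per_million(sigma_level, shift=1.5):
    """Approximate defects per million opportunities (DPMO) for a given
    sigma level, using the conventional Six Sigma 1.5-sigma long-term
    shift. The one-sided normal tail is computed with erfc."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(defect), one-sided
    return tail * 1_000_000

# A 3 Sigma process works out to roughly 66,800 DPMO,
# i.e. an error on about 6.7% of opportunities.
print(round(defects_per_million(3)))
```

In other words, at 3 Sigma an agent facing 100 scripted steps in a day gets six or seven of them wrong on average, which is roughly the level of unaided human reliability the essay is describing.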
Until centers figure out how to support the intelligence and empathy of humans with the reliability of automation and blend the two together seamlessly, the high error rates in contact centers will continue and world class will only mean "not as bad."
Friday, May 15, 2009
Wag the Dog: Why are we Letting Agent Traits Control Call Center Outputs?
I stumbled on a piece of research about how agent traits affect output measures of performance.
Here is my high-level summary of the results of the study…if you use measures of conscientiousness to screen and hire, it will, in general, improve your center-wide quality scores. However, when those agents start to burn out (which is more likely precisely because you hired more conscientious agents), your productivity will drop even more sharply.
There are however broader implications from this study. The paper highlights how agent conscientiousness and agent burnout affect performance. Well, raw intelligence affects performance and degree of domain specific content knowledge affects performance and distractibility affects performance and personality affects performance and "thickness of accent" affects performance and mood affects performance and motivation affects performance and on and on and on.
Now of course there is nothing wrong with studying employee traits to find out the ones that have the biggest effect on performance and then using that information to design selection tests to try to raise the level of performance in your centers by raising the presence of that trait. This approach has an unassailable track record of success (see Take the Guesswork Out of Hiring) and this approach has been the bread and butter of Industrial Psychology consulting firms large (see Personnel Decisions) and small (see All About Performance) for decades.
But the bigger question is this: why are call center leaders leaving their outputs at the mercy of so many variables they can’t control? And the industry’s attempt to deal with the challenge…to attack the endless drivers of agent variation (motivation, knowledge, conscientiousness, mood, intelligence, etc) with one-off efforts...a new selection test here, a rah-rah team meeting there, free pizza and doughnuts, occasional coaching sessions...is a fool’s errand at best.
Agent output metrics in the call center industry will be permanently hog-tied at an embarrassingly low level until we can figure out a cost effective way to reduce the effects of between-agent variation. Selection tests help reduce this variation, but they are not enough. Standardizing large swaths of our agents’ process using agent-assisted automation is not only the most effective and cost-efficient approach, it is the only sane solution I have seen to date.
Tuesday, May 12, 2009
Benchmark Error Rates for Contact Centers
A question was recently posted to a LinkedIn user group about standards and benchmarks for call center error rates. I penned a response along these lines.
Standards
First, the direct answer to the question: there is no goal, standard, or target for acceptable error rates in call centers. Acceptable error rates seem to be a function of what the agents are doing and what the consequences of an error are. (For more on this, see Does the Call Center Industry Need Malpractice Insurance?)
Let's consider a situation in which we change the price for a service and we decide to check the agents' accuracy in giving the new price. On the day after the price changes, there is no way the agents will quote the right price 100% of the time. What would be an acceptable accuracy rate? 75%? 80%? Would 45% be OK? What would be acceptable two months after the price change? 90%? 95%? If the agents get this wrong it is unfortunate, but not the end of the world, and most call center leaders seem willing to tolerate mediocre performance around process changes. (For more on the sloppy process changes in call centers, see Inside Jokes)
Tracking Error Rates
The question about an acceptable error rate for call centers raises another issue: tracking error rates in the first place. Again, consider the example above: how many call centers would even monitor the error rate around a pricing change a day, a week, or a month after the change was made? If you record every call, you can use speech analytics software to "listen" to the calls and calculate an error rate, but that is an expensive solution and not widely deployed.
For most centers, the only practical way to do it is to dedicate someone to listening to a sample of, say, 50 calls and estimating the center-wide error rate from that sample. Few do this. Processes are changing all the time in call centers; you would need a monitoring team almost the size of your agent population to track the error rates on all of those changes.
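To show what a 50-call sample actually buys you, here is a small Python sketch (standard library only; the function name is mine) that turns the number of errors heard in a monitored sample into a point estimate plus a 95% Wilson confidence interval. The width of that interval is the point: 50 calls pins the center-wide rate down only very loosely.

```python
import math

def wilson_interval(errors, n, z=1.96):
    """95% Wilson score interval for an error rate observed in a
    sample of n monitored calls. Returns (low, high) as proportions."""
    p = errors / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hearing 5 errors in 50 monitored calls...
low, high = wilson_interval(5, 50)
print(f"point estimate 10%, plausible range {low:.1%} to {high:.1%}")
```

With 5 errors in 50 calls, the true center-wide error rate could plausibly be anywhere from about 4% to about 21%, which is why small monitoring samples rarely settle an argument about quality after a process change.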
So no one is really tracking error rates except on the most egregious, costly errors. And because no one is tracking error rates, call centers commit a lot of them.
Driving Improvements
Once you determine your error rate, driving improvements is not easy. Call monitoring is the equivalent of inspecting quality in, a practice manufacturing abandoned a long time ago (see What the Call Center Industry Can Learn from Manufacturing: Part II). The only way monitoring can drive increased compliance is if you monitor almost every call, publicly track error rates, and dismiss agents who fall below 95%. That is a lot of work in and of itself, and it would result in a lot of expensive turnover.
Monday, May 4, 2009
Mass Customization and the Transformation of the Call Center Industry
- Don’t automate the greeting to ensure it is branded correctly every time; hope the agents aren’t so tired and bored that they mess it up.
- Don’t prerecord the disclosures; hope the agents read them word-for-word without accent issues interfering with customers’ understanding.
- Don’t use technology to ensure the right cross-sell offer is made at the right time every time; hope your fancy variable comp plan counteracts the unyielding pressure we put on the agents to reduce their talk time.
- Don’t error-proof the step reminding the customer to “remove any software before returning the unit” so that it can’t be skipped; besides, the angry letters and calls from customers to the CEO about their missing software go to another department.