There are two diametrically opposed answers to the question posed in the title. Here is the first one: a jaw-dropping level of contact center errors is completely acceptable, even when those errors involve breaking the law. "Preposterous!" you say. Please keep reading.
First, the big picture: in contact centers, no one talks about Six Sigma, five-nines, or Taguchi's "on target with minimum variation." Those ideas are discussed constantly in manufacturing but are risible notions in call centers. Variation is everywhere...call volume, call arrival patterns, call types, agent process variation, agent output variation. Rather than try to measure it and reduce it, most call center leaders are simply defeated by it.
"OK, we don't try to reduce variation. So what?" you say. "Why don't we just go from center to center, get their error rate and call the average acceptable?" You wouldn't even be able to do that because no one measures and tracks their error rate. Why? Because they don't want to know! Next time you get an email from someone pitching call center KPI offerings or benchmarks, open it up and see if "error rate" is one of the metrics they suggest, offer or track. It ain't in there.
In manufacturing, specs, i.e., what constitutes correct, are the sine qua non, and performance against those specs is constantly measured. But for some reason, contact centers rarely define what a correct call is...what the agents need to do in their systems and say to customers, by call type...and even less often measure performance against those standards, weight that performance by call volume, or track that performance over time. It is absolutely astonishing, and no one is talking about it.
As part of our work in this regard, we go into centers and we get agents, trainers, monitors, supervisors, etc., together to help us map out how a call is supposed to go. Inevitably, at some point, a food fight breaks out, with the various groups arguing over how a call is supposed to be handled. Obviously, if you haven't even bothered to define what correct is and your team is confused about the standards, "incorrect" has to be happening all the time.
Consider a quotidian price change for a service, where we decide to check the agents' accuracy in quoting the new price. Hate to break it to you, but on the day after the price changes, there is no way all the agents will quote the right price 100% of the time. They have the old prices and disclosures memorized, and they probably didn't read the email you sent out or the note you left on their chair. So then what would be an acceptable level of correct performance on a price quote? 75%? 80%? Would 45% be OK? What would be acceptable two months after the price change?
We know of one consumer electronics company, one of the biggest technology companies in the world, that listened to 10 out of 10 of their outsourcer's agents give the old price for a service. The outsourcer didn't even know their agents were making so many mistakes. The client, of course, was none too happy, but the outsourcer didn't get fired. De facto, the outsourcer's performance was acceptable. (For more on the sloppy process changes in call centers and the flagrant errors that persist for months, see Inside Jokes: What process changes in call centers and lost house pets in Tucson, Arizona have in common.)
I know what you are thinking. A price change? Come on! What's the big deal? If the agents get this wrong it is unfortunate, but not the end of the world.
OK, then what would be an acceptable error rate on, say, debt collection calls, which, in the US, are regulated by the Fair Debt Collection Practices Act (FDCPA)?
According to the FDCPA, debt collectors are required to disclose to the debtor 1) that they are calling from a debt collection agency and 2) the "mini-Miranda" disclosure ("...anything you say can be used to help collect this debt."). Failure to disclose could result in lost collections and stiff fines against the agency. Here we might need to be a little better...how about 90%? Would 85% be OK? We work with multiple collections agencies, and their performance on just these two disclosures (prior to deploying our software, of course!) is highly variable and all less than 90%, despite the fact that it is the law!
You are thinking that is one highly specialized example and you are not convinced there is a problem here, are you? Before using our software, one financial services center we worked with averaged 88% on legally required disclosures when they measured it and once got as high as 92%. This was their performance for years!
Since it is happening all the time and not improving in most centers that bring us in, you have to conclude that breaking the law 10-15% or more of the time is acceptable. You can argue it is not acceptable, but when a problem exists for years with no change in tactics or results, it is, de facto, acceptable.
Let's talk a little more about defining correct. I just described a call type where there was a legal definition of what constituted correct. But that isn't the only thing that makes a call correct. There are lots of things to get right on phone calls, and they change by call type, which is why I keep harping on measuring correct by call type. Some calls have required consumer protection disclosures...this is huge in collections, as mentioned, but also in health care and financial services (financial services disclosures are becoming a huge issue in the US with the advent of the new Consumer Financial Protection Bureau). Sometimes these are required by law, but sometimes the company is delivering the disclosures to limit its own legal liability or to limit repeat calls or calls to another department. This is a shareholder definition of correct.
Sometimes doing the right cross-sell based on the product and the customer profile is what constitutes "correct," and not doing it means a company loses revenue. Again, the shareholders say correct means doing the right cross-sell every time.
On other calls, it is the customer who hopes you know what correct is and who is counting on you to get it right. The right price. The right address for returning a product. The right diagnostic steps taken to troubleshoot their issue and get them up and running again.
Who is tracking "correct," by call type, from these various stakeholder perspectives...not just on a monitoring form for a single agent, but across agents? Beyond the calls with legally required disclosures, I would bet few, if any.
The question about an acceptable error rate for call centers raises another issue: not just measuring error rates but tracking them over time so 1) they can be improved, 2) we can make sure our improvement strategies are actually working, and 3) we can make sure we are getting a return on investment for those improvement initiatives.
For most centers, the only way to determine the error rate, just for a single point in time, is to dedicate a group to listen to 50 or 100 calls scored against clearly defined Required Call Components (RCCs) and estimate the center-wide quality rate from the sample. This alone is a lot of work. To track the error rate over time, you would have to repeat the process every day or every week. Fat chance. And if you ever tried to do this by call type, you would end up with a monitoring team larger than your agent population.
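The sampling math behind that claim can be sketched in a few lines of Python. The sample sizes and error counts here are hypothetical, and the normal-approximation interval is just one common choice, but it shows why a 100-call sample gives only a rough picture of the center-wide error rate:

```python
import math

def estimate_error_rate(errors_found: int, calls_sampled: int, z: float = 1.96):
    """Estimate a center-wide error rate from a monitored sample of calls,
    with a normal-approximation 95% confidence interval."""
    p = errors_found / calls_sampled
    margin = z * math.sqrt(p * (1 - p) / calls_sampled)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: 12 calls out of 100 monitored were missing a required component.
rate, low, high = estimate_error_rate(12, 100)
print(f"Estimated error rate: {rate:.0%} (95% CI roughly {low:.1%}-{high:.1%})")
```

Even at 100 calls, the interval stretches from roughly 6% to 18%, wide enough to span "tolerable" and "alarming." Tightening it, by call type, every week, is exactly the monitoring workload the paragraph above calls a fat chance.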
So where does this leave us? Ask a center leader, just for the most frequent call type his or her center gets, what the error rate is (performance against legal, shareholder, and customer RCCs). And is the error rate on that one call type over the last year getting better, getting worse, or treading water? They won't be able to tell you.
Let that sink in. They haven't defined correct, they aren't tracking correct performance over time, and they aren't doing anything differently to increase the percentage of correct calls. Management won't say it with their "outside voice," but I hope now you understand the answer given in the first paragraph to the question posed in the title: a jaw-dropping level of contact center errors is completely acceptable, even when those errors involve breaking the law.
Lowering the Error Rate Once You Know It
Should you decide to wade into this murky water and try to determine the error rate for some call types, the number you come out with will likely not be too flattering. You may find yourself motivated to try to lower that error rate. You have a couple of options. One is terrible, hasn't worked, and will never work; the other works perfectly every time. Guess which one call centers use?
Call monitoring is the same as trying to "inspect in" quality in manufacturing, a practice manufacturing abandoned a long time ago (see What the Call Center Industry Can Learn from Manufacturing: Part II). The only way monitoring can drive increased compliance is if you monitor almost every call, publicly track error rates, and dismiss the agents who are statistically worse. This is a lot of work in and of itself. It would result in a lot of expensive turnover. And it would only have a slight impact on your average error rate.
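"Dismiss the agents who are statistically worse" hides a lot of monitoring effort. A standard two-proportion sample-size approximation, sketched below with hypothetical rates, shows roughly how many calls you would have to score per agent just to distinguish a genuinely worse agent from the center average:

```python
import math

def calls_needed(p0: float, p1: float, z_alpha: float = 1.645, z_beta: float = 0.84) -> int:
    """Approximate number of monitored calls per agent needed to detect that
    an agent's true error rate p1 exceeds the center average p0
    (one-sided test, ~5% false-positive rate, ~80% power, normal approximation)."""
    num = (z_alpha * math.sqrt(p0 * (1 - p0)) + z_beta * math.sqrt(p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

# Hypothetical: center averages 10% errors; we want to flag agents running 15%.
print(calls_needed(0.10, 0.15))  # hundreds of monitored calls per agent
```

Only agents who are dramatically worse can be identified cheaply; separating a 15% agent from a 10% center takes monitoring on a scale no QA team is staffed for, which is the point of the paragraph above.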
Sadly, this is the go-to method for improving agent output metrics. In my view, it is the reliance on monitoring and coaching that is a huge contributor to the ridiculous number of errors made in call centers every day. In all the examples given in this post, every single one of these centers had extensive monitoring and coaching programs. Do you think they weren't doing it right? What sane person can argue that more monitoring and coaching will solve this problem when it hasn't solved it in 40 years? (For a full discussion of why one-agent-at-a-time monitoring and coaching can never improve error rates or other agent output metrics, see Call Center Coaching Remains A Labor in Vain.)
Instead of paying for a bunch of monitors to act like cops with radar guns trying to catch people doing it wrong, why not just make it easy for the agents to do it correctly...every time?
Stealing a page from manufacturing's playbook, centers can use error-proofing to make it impossible for agents to skip key steps. Desktop consolidation and agent-assisted automation are the best practices here, and with these approaches, error-free quality is easily achievable. (See Fixing Between Agent Variation Can Make All the Difference and Agent-assisted Automation.)
For example, disclosures are pre-recorded and integrated into the CRM so that the agent cannot complete the call until the information has been "read" to the customer. In the case of the collection calls mentioned earlier, once the debtor was on the line, the agent could not open the record and begin to discuss the debt until the two legally required disclosures were provided to the customer using the pre-recorded audio. Once the software signaled the CRM that the messages had been played, the record opened up and the collector could see how much was owed and discuss options with the customer. Legal disclosures at the end of financial services and health care calls work the same way...the call cannot be completed and the order cannot be submitted until the software signals the CRM that the required information has been played to the customer.
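The gating described above amounts to a small state machine: the record stays locked until every required disclosure has played. This is a minimal sketch of the idea, not the actual software; the class, method, and field names are all illustrative:

```python
from enum import Enum, auto

class CallState(Enum):
    CONNECTED = auto()
    DISCLOSURES_PLAYED = auto()
    RECORD_OPEN = auto()

class CollectionCall:
    """Error-proofing sketch for a collection call: the debtor's record
    cannot be opened until both legally required disclosures have played."""
    REQUIRED = {"agency_identification", "mini_miranda"}

    def __init__(self):
        self.state = CallState.CONNECTED
        self.played = set()

    def play_disclosure(self, name: str):
        # The agent triggers the pre-recorded audio; the software logs it
        # and unlocks the record only once every required disclosure is done.
        self.played.add(name)
        if self.REQUIRED <= self.played:
            self.state = CallState.DISCLOSURES_PLAYED

    def open_record(self):
        # Error-proofing: the CRM refuses until disclosures are complete.
        if self.state is not CallState.DISCLOSURES_PLAYED:
            raise PermissionError("Legal disclosures not yet played")
        self.state = CallState.RECORD_OPEN
        return {"balance_due": 412.50}  # hypothetical debtor record
```

The design choice is the point: compliance is enforced by the workflow itself rather than inspected after the fact, so an agent cannot skip the disclosures even on their eightieth call of the day.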
Think the customers wouldn't like this? Think again. We have experience with thousands and thousands of agents handling hundreds of millions of phone calls and the customers rarely even comment, let alone complain. In the rare instance a customer does comment, the agent says something to the effect that "I am using software to make sure the call is 100% correct and easy to understand. Is that OK?" Customers are delighted by that.
Think the agents wouldn't like this? Think again. They hate having to read the same information over and over again...80 calls a day, five days a week. This approach gives them a chance to rest and do some of the After Call Work, and lets them worry a little less about having to get everything right. The boredom, repetition, and stress are among the reasons turnover is so high in call centers. (See Why Your Turnover Reduction Efforts are Not Working.) Letting the agents use automation is one of the most anodyne tools ever implemented in call centers.
Some days it seems as if there are an overwhelming number of problems in this world. So many you almost hate to turn on the news. But you know what? Polio isn't one of them (though sadly it is making a comeback; see Polio's Return after near Eradication Prompts a Global Health Warning). It used to be a huge problem until they invented a vaccine. Asking what is an acceptable number of polio cases in the world and making excuses for the cases you do have makes no sense, because since polio is completely preventable, there shouldn't be any cases.
Arguing and worrying about what level of contact center agent errors we should tolerate also makes no sense because there is a way to deliver error-free performance every time.
An alternative answer then to the question posed in the title? Zero.