AI in customer support isn't the problem. Rushing it is.
45 years in support, an AI believer, and a warning: we're breaking the one thing we can't automate and can't get back.
I’ve spent more than 45 years in customer and technical support.
I’ve been on the calls when payroll didn’t process the night before. I’ve sat with customers who were stressed, frustrated, and sometimes in tears because something critical broke at the worst possible time.
And now? I use AI every single day. I believe in it. I see the future.
But what I’m watching companies try to do right now with AI in customer support—especially in call centers—is moving faster than reality can support. And that’s where things start to break.
AI is powerful. It’s not magic.
Let’s be honest.
AI is great at handling repetitive questions, quickly surfacing knowledge, and supporting agents behind the scenes. Implemented well, it can absolutely transform support.
But here’s what’s getting missed: AI doesn’t understand your business. It reflects what it’s been trained on. If your documentation is messy, outdated, or incomplete, AI doesn’t fix that.
It scales it.
You don’t get empathy out of the box
You can train tone. You can guide language. You can simulate care.
But real empathy—the kind that builds trust in a stressful moment—still comes from human experience. And the research is catching up to what anyone who’s worked in support already knows.
SurveyMonkey found that just 8% of customers prefer AI to human agents in customer service, while 90% prefer human agents because they better understand their needs. Recent surveys now rank interacting with AI chatbots among the most frustrating parts of contacting a business, second only to being placed on hold.
The pattern is clear: customers still want humans when it matters.
The myth of the “95% AI call center”
There’s a number that shows up in almost every AI customer service pitch: “By 2025, 95% of customer interactions will be AI-powered.”
That stat traces back to a 2017 prediction from Servion Global Solutions, which has been recycled ever since. And here’s the problem: “AI-powered” gets interpreted as “fully handled by AI,” when what it really means is that AI touches the interaction in some way—routing the call, suggesting an answer, summarizing the ticket, helping the agent draft a response.
That nuance gets lost. Executives hear “95% AI” and start setting targets that aren’t grounded in how any of this actually works.
Here’s what credible, current research actually says:
Gartner predicts that by 2029, agentic AI will autonomously resolve about 80% of common customer service issues—not all issues. Common ones. And by 2029, not next quarter.
Zendesk’s 2025 CX Trends report found roughly three out of four CX leaders expect around 80% of interactions to be resolved without a human agent in the next few years. Expectation, not reality.
Real deployments at large companies are landing in the mid-40s to around 50% containment for routine queries, with the rest still going to humans. And those results are considered good.
Read that again: the wins are landing in the 45–50% range. Even the best, well-executed implementations still send more than half of tickets to a human.
So when you hear a company announcing “95% AI customer service,” one of three things is happening. They’re using a generous definition of “AI-powered.” They’re chasing a number that doesn’t exist in the real world. Or they’re about to learn the hard way.
Because not all support is created equal.
There’s a big difference between “How do I reset my password?” and “My payroll didn’t run, and I have employees expecting checks tomorrow.” One is transactional. The other is emotional, urgent, and high-stakes.
In those moments, customers don’t want a chatbot. They want a human who understands what’s at risk.
The realistic, sustainable target that current research supports is around 70–80% automation of truly routine queries, with the remaining 20–30% handled by humans. Anyone aiming much higher is either redefining “AI-powered” or about to repeat someone else’s painful lessons.
This isn’t AI failing. This is leadership rushing.
What AI still can’t do
Even the best AI today has real limitations:
Ambiguous problems
Complex configurations
Edge cases that aren’t clearly documented
Cross-product issues
Emotional nuance
Real empathy in the middle of a crisis
Those are still human strengths.
And then there’s hallucination
Here’s the part that doesn’t get talked about enough—and the one I ran into just this morning.
AI hallucinates. It will give you a confident, polished, completely wrong answer and serve it up like it just handed you the truth on a silver platter.
And it can do this even when you attach a reference document and tell it, “Use this. Only this.”
I had it happen today. I gave the model a source document. I asked a specific question. It pulled an answer that wasn’t in the document, dressed it up nicely, and presented it as fact. If I hadn’t already known the right answer, I would have walked away with bad information and never questioned it.
That’s me—someone who uses AI all day, every day, and knows to double-check.
Now imagine the same dynamic in a customer support center. A customer asks about a refund policy. The AI confidently invents one. Or misstates a warranty term. Or fabricates a feature that doesn’t exist. The customer takes that answer at face value, makes a decision based on it, and the company is now on the hook for whatever the AI said.
This is not theoretical. In Moffatt v. Air Canada, a Canadian tribunal held the airline liable after its chatbot gave misleading information about bereavement fares. Air Canada argued the chatbot was a separate entity. The tribunal wasn’t buying it.
And here’s the part that surprises many leaders: giving AI your documentation does not eliminate this risk. The technical term for connecting models to internal content is retrieval-augmented generation (RAG), and it absolutely helps. But it’s not a fix. AI can still synthesize two correct documents into one wrong answer. It can pull the wrong passage. It can fill a gap with something that sounds right but isn’t.
Evaluations of grounded AI systems show hallucination rates ranging from a few percent up into the low 20s, depending on the setup. Domain-specific legal AI tools have hallucinated at rates in the high teens to low 30s on certain tasks.
In a payroll emergency, a benefits question, or a billing dispute, those error rates aren’t acceptable.
A human who doesn’t know the answer says, “Let me find out.” By default, AI says *something*. And in customer support, “something” can be more dangerous than “I don’t know.”
This is fixable. It takes guardrails, validation layers, confidence thresholds, and a clear path to a human the moment the AI isn’t sure. It takes process, not just a prompt.
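Here's what that routing logic can look like, as a minimal sketch. The thresholds and field names (`model_confidence`, `grounding_score`) are placeholders I've made up; the shape matters more than the numbers.

```python
# Minimal escalation gate: the AI answers only when it is confident,
# grounded, and the topic is low-stakes. Everything else goes to a human.
# Thresholds and field names are illustrative, not prescriptive.

from dataclasses import dataclass

@dataclass
class DraftAnswer:
    text: str
    model_confidence: float  # 0-1, from the model or a calibration layer
    grounding_score: float   # 0-1, how well the text matches source docs
    high_risk_topic: bool    # payroll, billing, warranties, refunds, legal

CONFIDENCE_FLOOR = 0.85
GROUNDING_FLOOR = 0.80

def route(draft: DraftAnswer) -> str:
    """Decide whether the AI answers directly or a human takes over."""
    # High-stakes topics always get a human, no matter how sure the model is.
    if draft.high_risk_topic:
        return "HUMAN: high-risk topic; an agent reviews before anything goes out"
    if draft.model_confidence < CONFIDENCE_FLOOR:
        return "HUMAN: low confidence; hand off with the draft attached"
    if draft.grounding_score < GROUNDING_FLOOR:
        return "HUMAN: answer isn't well supported by the knowledge base"
    return f"AI: {draft.text}"

# A confident, well-grounded payroll answer still goes to a human.
print(route(DraftAnswer("Your payroll run is queued.", 0.95, 0.92, True)))
# A routine, confident, grounded answer can go out directly.
print(route(DraftAnswer("Reset your password in account settings.", 0.97, 0.95, False)))
```

The ordering is the design decision: risk first, confidence second, grounding third, with the human path as the default rather than the exception.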
None of that exists in a “95% AI call center” built on a tight timeline and a smaller payroll. It exists in an AI program built thoughtfully—by people who understand both the technology and what’s at stake when it gets it wrong.
Let’s name what’s actually happening
A lot of these “AI transformations” aren’t really about AI.
They’re about private equity firms and finance leaders looking at a balance sheet, seeing a big customer support line item, and deciding that AI is the cheapest way to shrink it. The AI story is the press release. The spreadsheet is the strategy.
That’s not responsible AI adoption.
That’s cost-cutting wearing an AI costume.
Forrester research, reported by outlets like HR Executive and The Register, found that roughly 55% of employers regret layoffs they attributed to AI, and about half of those roles get quietly rehired, often offshore or at lower pay. That's not transformation. That's arbitrage dressed up in better language.
And it’s giving AI a bad name.
People who actually love this technology—people like me—are watching this play out and getting frustrated. Because every customer stuck in a chatbot loop, every payroll that doesn't process while a bot asks the customer to rephrase the question, every employee training their replacement on the way out, adds to a growing narrative that AI can't be trusted.
AI can be trusted. The humans rushing it cannot always be.
Nobody is putting the human cost on the spreadsheet
Before we get to the framework, let’s talk about the part that’s easy to skip in a LinkedIn post.
When a company announces it’s moving to “AI-first support,” there are people on the other side of that sentence. People with mortgages. Kids in college. Parents to care for. People who spent years—sometimes decades—building the exact expertise the company is now trying to compress into a model in a matter of months.
Some of them are being asked to train their replacements on the way out.
Let that sit for a minute.
And the ones who stay? They carry more. More workload, more guilt, more anxiety about when they’ll be next. Psychological safety erodes. Engagement tanks. Institutional knowledge walks out the door with every layoff, and you don’t get it back.
You can’t automate your way out of that cost. It shows up in your culture, in your Glassdoor reviews, in the quality of service your customers eventually receive. It shows up in the people who stay and stop bringing their best, because—honestly—why would they?
If you’re a leader reading this: the people doing this work aren’t a line item. They’re the reason your customers trust you in the first place.
A better way forward
AI should enhance support. It should not replace the human connection that makes support work in the first place.
Here’s a model that actually holds up in the real world.
Phase 1: Stabilize your foundation. Before AI touches anything, clean up your knowledge base, identify your most common support issues, and standardize responses. If your foundation isn’t solid, AI will amplify the cracks.
Phase 2: Augment your team. Use AI to support your people. Let it draft responses, surface answers faster, and take the first pass. Let agents review, personalize, and stay in control. This builds trust internally before you ever expose customers to it. A Harvard Business School study of more than 250,000 chat conversations found that AI assistance helped human agents respond about 20% faster—and with more empathy and thoroughness, especially for less-experienced agents.
Phase 3: Automate carefully. Only automate what’s repetitive, well documented, and low-risk. Not everything should be automated, and that’s okay. Beyond a realistic ceiling for routine queries, you’re crossing into territory where human judgment and empathy matter most.
Phase 4: Keep humans in the loop. Always provide an easy path to a human. Monitor AI responses for accuracy. Improve continuously based on real feedback from both customers and frontline staff. AI should never be a dead end.
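Here's a minimal sketch of how Phases 3 and 4 combine. The ticket types, fields, and volume threshold are all made up for illustration; the rule is what matters: the automated lane only opens when an issue is repetitive, documented, and low-stakes, and even then the human path stays.

```python
# Phase 3 gate: automate only the repetitive, documented, low-risk.
# Phase 4 rule: even the automated lane keeps a one-click path to a human.
# Ticket types, fields, and thresholds below are illustrative.

from dataclasses import dataclass

@dataclass
class TicketType:
    name: str
    monthly_volume: int       # how often this issue shows up
    has_vetted_article: bool  # is there a reviewed knowledge base answer?
    high_stakes: bool         # payroll failures, billing disputes, outages

def eligible_for_automation(t: TicketType, min_volume: int = 100) -> bool:
    """Automate only what is repetitive, well documented, and low-risk."""
    return (
        t.monthly_volume >= min_volume  # repetitive
        and t.has_vetted_article        # well documented
        and not t.high_stakes           # low-risk
    )

tickets = [
    TicketType("password_reset", 1200, True, False),
    TicketType("payroll_did_not_run", 40, True, True),
    TicketType("cross_product_config", 15, False, False),
]

for t in tickets:
    lane = "automate (human escape hatch stays)" if eligible_for_automation(t) else "human"
    print(f"{t.name}: {lane}")
```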
What leaders need to do right now
This isn’t just a technology decision. It’s a leadership decision.
If you’re a CEO or executive, stop chasing AI headlines and start asking better questions. Not “What can we save?” but “What experience are we creating?” Invest in your knowledge systems before you invest in automation. Measure trust and customer satisfaction, not just cost reduction. If your AI business case falls apart the moment you factor in quality, turnover, and the eventual rehiring bill, it was never a real business case.
If you lead operations or support, map your support tickets (even a simple frequency count, like the sketch below, goes a long way). Figure out what's actually repetitive versus what requires judgment. Build clear escalation paths. Train your teams to work with AI, not fear it. Fear does not produce good service.
If you’re on the frontline, you are still the trust builders. Use AI as a tool, not a replacement for your judgment. Speak up when something doesn’t feel right. Keep bringing the human side—empathy, reassurance, clarity. That’s the part no model has figured out yet.
Final thought
AI isn’t the problem.
The problem is what happens when we move faster than our systems, our people, and our customers are ready for, and we leave people behind. When we treat AI as a cost-cutting tool instead of a capability worth building well. When we forget that the people we're replacing spent years earning the trust the company now takes for granted.
Get this right, and AI can elevate customer support in ways we’ve never seen before.
Rush it, and we risk breaking the very thing that makes support work in the first place.
Trust.
And trust is still—and always will be—human.
References
SurveyMonkey, Customer Service Statistics 2026: Humans vs AI Trends.
Zendesk, 2025 CX Trends Report.
GrooveHQ, 55 AI Customer Support Statistics for 2026.
Lorikeet CX, AI Customer Service Statistics: 30 Data Points for 2026.
Gartner, Agentic AI Will Autonomously Resolve 80% of Common Customer Service Issues Without Human Intervention by 2029.
Forrester, research on AI-attributed layoffs and rehiring, as reported by HR Executive and The Register.
Harvard Business School, Shunyuan Zhang and Das Narayandas, When AI Chatbots Help People Act More Human.
Moffatt v. Air Canada, 2024 BCCRT 149.
Industry and vendor reports on AI containment and resolution rates in large deployments.
Benchmarks on AI hallucination rates in grounded/RAG systems and legal AI tools.