We Love Technology. We Just Don’t Trust It on the Phone. Yet.
Every week, someone pitches us on an AI cold calling solution. “Replace your callers with our AI agent! It sounds just like a real person! It can handle objections! It never takes a day off!”
And every week, we politely decline. Not because we’re Luddites afraid of technology — we love technology. Our operation runs on power dialers, CRM automation, skip tracing algorithms, and data analytics. We’re not writing names in a ledger book.
But the actual phone conversation? The part where a human being talks to another human being about one of the biggest financial decisions of their life? That’s still a human job.
Here’s why.
The Current State of AI Calling (Honest Assessment)
Let’s acknowledge what AI calling can do in 2022:
What AI does well:
- Dialing at massive scale (thousands of simultaneous calls)
- Playing pre-recorded messages and detecting voicemail
- Simple appointment confirmations (“Your appointment is tomorrow at 2 PM. Press 1 to confirm.”)
- Basic surveys with multiple-choice responses
- Leaving voicemail drops consistently
What AI does poorly:
- Understanding context and nuance in real conversations
- Handling unexpected responses gracefully
- Building rapport and trust
- Navigating emotional situations (probate, divorce, financial distress)
- Adapting tone to match the caller’s energy
- Understanding accents, slang, and regional speech patterns
- Knowing when to push and when to back off
The AI cold calling products on the market right now fall into two categories:
Category 1: Glorified IVR Systems
These are essentially interactive voice response menus dressed up as “AI.” They play pre-recorded clips based on keyword detection. “If the person says ‘not interested,’ play clip 7.” These fool absolutely nobody and actively annoy the people being called.
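Under the hood, a "glorified IVR" amounts to little more than a phrase table. Here is a toy sketch of that routing logic; the clip names and trigger phrases are invented for illustration, not taken from any real product:

```python
# Toy sketch of keyword-to-clip routing in a "glorified IVR."
# Clip filenames and trigger phrases are hypothetical examples.
CLIP_RULES = {
    "not interested": "clip_07_soft_rebuttal.wav",
    "who is this": "clip_02_introduction.wav",
    "take me off": "clip_09_opt_out.wav",
}
DEFAULT_CLIP = "clip_01_opening.wav"

def pick_clip(transcribed_speech: str) -> str:
    """Match the caller's transcribed words against a fixed phrase table."""
    speech = transcribed_speech.lower()
    for phrase, clip in CLIP_RULES.items():
        if phrase in speech:
            return clip
    return DEFAULT_CLIP
```

Anything a homeowner says that isn't in the table falls through to a default clip, which is exactly why these systems collapse the moment a conversation goes off-script.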
Category 2: Conversational AI (GPT-Based)
These are the more sophisticated options that use large language models to generate responses in real time. They’re genuinely impressive in demos. The voice sounds natural, the responses are contextually relevant, and they can handle some basic back-and-forth.
But demos aren’t real calls. And real calls are messy.
Why Humans Still Win: The Nuance Problem
Cold calling motivated sellers isn’t like scheduling a dentist appointment. You’re calling people who are potentially going through the worst periods of their lives — facing foreclosure, dealing with a death in the family, going through a divorce, drowning in debt.
These conversations require something that AI fundamentally cannot do in 2022: genuine empathy.
Here’s an actual exchange from one of our caller recordings at Televista (details changed for privacy):
Caller: “Hi Martha, this is James calling about your property on Oak Street. Do you have a quick minute?”
Martha: “Oh… yes. That’s my mother’s house. She passed away three months ago and I just… I don’t know what to do with it. I can’t even bring myself to go over there.”
Caller: “I’m really sorry to hear about your mom, Martha. That’s incredibly hard. Take your time — there’s no rush on anything.”
Martha: [Long pause] “Thank you. Nobody else who’s called has said that. They just want to know if I’ll sell.”
Caller: “Well, I’m here to listen first. Whenever you’re ready to talk about the property — even if that’s not today — I’m here.”
That call resulted in an appointment. Martha eventually sold the property to our client for a fair price, and she specifically mentioned that she chose them because “the person on the phone was the only one who treated me like a human being.”
No AI system in 2022 can replicate that exchange. Not the words — an AI could generate similar words. But the timing. The pause after “take your time.” The warmth in the voice. The sincerity that made Martha feel heard.
An AI would have detected keywords like “passed away” and pivoted to a sympathy script. And Martha would have known. People always know.
The Compliance Minefield
Here’s the part that should scare any business owner considering AI cold calling: the legal landscape is a mess.
FTC’s Evolving Stance
The FTC has signaled increasing scrutiny of AI-generated voice calls. While the specific regulations are still developing, the trajectory is clear — using AI voices that don’t clearly identify themselves as artificial could be classified as a deceptive practice.
TCPA Considerations
The TCPA requires that callers identify themselves and their organization. If an AI system is making calls and a recipient asks “Am I talking to a real person?”, what happens? If the AI says “yes” — that’s fraud. If it says “no” — the person hangs up. If it dodges the question — that’s arguably deceptive.
Some states are already moving to require explicit disclosure when the voice on a call is AI-generated:
- California’s B.O.T. Act requires bots to disclose their non-human nature in certain contexts
- Several other states have similar legislation pending
Practical Risks
Even without explicit AI-calling legislation, the liability exposure is significant:
- Class action potential: TCPA statutory damages run $500 per violation, up to $1,500 for willful violations. An AI system making 10,000 non-compliant calls faces $5-15 million in exposure.
- Reputational damage: Getting caught using deceptive AI calls can destroy a company’s reputation overnight.
- Platform risk: Telecom carriers are increasingly flagging and blocking suspected robocalls/AI calls.
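The arithmetic in that first bullet is worth spelling out. A quick back-of-the-envelope calculation, assuming the statutory range of $500 to $1,500 per violation:

```python
# Back-of-the-envelope TCPA exposure math.
PER_VIOLATION_MIN = 500    # statutory damages per violation (USD)
PER_VIOLATION_MAX = 1_500  # up to this amount for willful violations

def exposure_range(non_compliant_calls: int) -> tuple[int, int]:
    """Return the (minimum, maximum) statutory exposure in dollars."""
    return (non_compliant_calls * PER_VIOLATION_MIN,
            non_compliant_calls * PER_VIOLATION_MAX)

low, high = exposure_range(10_000)
print(f"${low:,} to ${high:,}")  # $5,000,000 to $15,000,000
```

Ten thousand calls is a single afternoon for an AI dialer. One compliance bug repeated at that scale is a company-ending event.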
At Televista, we take compliance seriously because our clients’ businesses depend on it. Until the legal framework around AI calling is clear and settled, using human callers is the safer path.
The “Uncanny Valley” Problem
You know that creepy feeling you get when a CGI character in a movie looks almost real but something is slightly off? That’s the uncanny valley, and it applies to AI voice calls too.
Current AI voices are impressive. They can sound natural, match pacing, and even inject filler words like “um” and “you know” for realism. But when you’re on a real phone call and the conversation takes an unexpected turn, the AI’s response is just slightly… wrong. The timing is off by 200 milliseconds. The emotional response doesn’t quite match the context. The AI circles back to its talking points a little too smoothly.
Homeowners notice. Maybe not consciously, but subconsciously they feel something is off. And that subtle discomfort translates directly into distrust. Which translates directly into hang-ups.
When We’ll Reconsider (Because We Will)
We’re not AI skeptics — we’re AI realists. The technology is improving at a staggering pace. Here’s what needs to happen before we’d consider deploying AI callers:
1. Emotional Intelligence
The AI needs to detect and respond to emotional cues — not just keywords, but tone, pacing, pauses, and energy level. When someone sounds sad, the AI should slow down and soften. When someone sounds annoyed, it should get to the point. When someone is enthusiastic, it should match that energy.
We’re probably 3-5 years from AI that can do this convincingly.
2. Regulatory Clarity
We need clear federal and state guidelines on AI calling. What disclosures are required? What constitutes deceptive AI calling? What are the penalties? Until the rules are clear, the risk is too high for our clients.
3. Indistinguishable Quality
Not “pretty good” quality. Indistinguishable. The AI needs to pass the Turing test on a phone call — not just for 30 seconds in a demo, but for a full 5-minute conversation with an elderly homeowner who has questions, concerns, and emotional baggage.
4. Better Objection Handling
Current AI can handle scripted objections reasonably well. But real objections are messy, overlapping, and often unspoken. “I’m not interested” might mean “I’m interested but scared.” “My husband would never agree” might mean “Convince me so I can convince him.” Human callers read between the lines. AI reads the lines.
What We DO Use AI For
Just because we don’t put AI on the phone doesn’t mean we don’t use it. AI and automation are embedded throughout our operation:
- Predictive analytics: Using data models to score and prioritize leads before calling
- Voicemail detection: AI-powered voicemail detection so callers don’t waste time waiting for the beep
- Call transcription: Automated transcription of call recordings for QA review
- Sentiment analysis: Flagging calls that may need manager review based on tone/content
- List optimization: Machine learning to identify which list segments perform best
- Automated follow-up: AI-triggered text and email sequences based on call outcomes
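To make the first item concrete: a predictive lead-scoring pass can be as simple as summing weighted motivation signals and sorting the list before anyone dials. The feature names and weights below are invented for this sketch; they are not Televista's actual model:

```python
# Hypothetical lead-scoring sketch: weighted motivation signals.
# Feature names and weights are illustrative, not a real model.
WEIGHTS = {
    "tax_delinquent": 3.0,
    "absentee_owner": 2.0,
    "high_equity": 1.5,
    "recently_inherited": 2.5,
}

def score_lead(lead: dict) -> float:
    """Sum the weight of every motivation signal present on the lead."""
    return sum(w for feature, w in WEIGHTS.items() if lead.get(feature))

def prioritize(leads: list[dict]) -> list[dict]:
    """Sort leads so the highest-scoring ones get dialed first."""
    return sorted(leads, key=score_lead, reverse=True)
```

Real systems replace the hand-picked weights with a trained model, but the shape is the same: score every record, then put your human callers on the best ones first.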
This is where AI shines right now — augmenting human callers, not replacing them. Our callers are better because of these tools; they’re just not replaced by them.
The Hot Take
Here’s our prediction: within 5-7 years, AI callers will be good enough for certain cold calling use cases. Probably appointment confirmations and basic qualification first. Then simple sales conversations. Eventually, maybe, complex emotional conversations.
But “eventually” isn’t today. And your business needs appointments today.
Right now, the best cold calling operation is a human caller with AI-powered tools — great data, smart dialers, automated follow-up, and quality analytics. That’s what we build at Televista.
The AI revolution in cold calling is coming. We’ll be early adopters when it’s ready. But we won’t be guinea pigs, and we won’t experiment with our clients’ reputations and lead flow.
Humans first. AI assists. For now.
Want to see what human-powered, AI-assisted cold calling looks like? Talk to the Televista team.