Tuesday, February 10, 2026

We Interrupt Every 11 Seconds: Why AI Listens Better Than Your Best Friend

I spent six months a few years ago sitting in a circle with strangers at a communication training course, learning something I thought I already knew how to do:

Listen.

Not the performative nodding. Not the waiting-for-my-turn-to-talk variety. Real listening. The ego-dissolving, transformation-inducing kind that a skilled listener has mastered: the kind that makes you feel like you're the only person in the room, that makes you the center of attention. The sad truth? Those people are rare.

Being a champion listener requires you to surrender your ego, to put on hold the thing we crave most: being seen as the smartest and most valuable person in the room.

Those six months changed everything. My business conversations went deeper. My personal relationships shifted and flourished. My thinking clarified. And by becoming more aware of myself, I became more aware of others.

But here’s the twist I didn’t see coming in 2026: on a practical, obvious level, the machines are better at listening than we are. AI companions and mentors are immensely popular for one simple reason: they listen.

AI is making us more aware of our humanity

Artificial intelligence is revealing just how catastrophically bad we’ve become at the very skill that makes us most human.

And the evidence is more damning than you might imagine.

Research from Harvard Business Review shows that people retain only 25% of what they hear in conversations. Meanwhile, studies reveal we spend approximately 60% of our communication time talking about ourselves, when we’re supposedly listening to others.

Brain imaging shows we’re actually mentally rehearsing our response within 8 seconds of someone starting to speak.

A University of California study found that the average listener interrupts within 11 seconds of someone starting to talk. Not minutes. Seconds. That’s barely enough time to form a complete thought, let alone express it.

Even more telling: Research shows that in business meetings, executives listen at only 25% efficiency, meaning they miss, misunderstand, or forget 75% of what they hear. A study published in the International Journal of Listening found that immediately after a 10-minute presentation, the average listener can recall only 50% of what was said. Within 48 hours, that drops to 25%.

According to research by Faye Doell at Loyola University, about 75% of the time people engage in “narcissistic listening”—redirecting conversations back to themselves rather than exploring the other person’s experience.

We’ve become a civilization of broadcasters with no receivers.

All talk and very little listening. 

Why deep listening matters (More than you think)

Before we dive into AI’s uncomfortable revelations about our listening failures, let’s be clear about what’s at stake. Deep listening isn’t a “nice to have” soft skill for better dinner conversations. 

It’s the foundation of human connection, innovation, leadership, and personal transformation.

In relationships:

Research by psychologist John Gottman shows that couples who practice active listening have a 7.5 times higher likelihood of relationship success. Feeling heard is one of the strongest predictors of relationship satisfaction—stronger than agreement, shared interests, or even conflict resolution skills. Studies show that when people feel chronically unheard in relationships, they report 67% higher rates of loneliness and are 2.4 times more likely to experience clinical depression.

In business:

A study of 3,492 participants in a development program designed to help managers become better coaches found that listening effectiveness improved coaching success by 40%. Research shows companies with cultures of deep listening see 21% higher employee engagement scores and 19% higher innovation rates. Leaders rated in the top quartile for listening skills have teams that perform 40% better on key metrics. Yet most organizations spend an average of $800 per employee annually on presentation and communication training, while spending less than $50 on listening skills development.

In personal growth:

Psychologist Carl Rogers’ research demonstrated that clients who experienced high-quality empathic listening showed 82% greater improvement in self-awareness and psychological integration compared to those who received directive advice. People who experience consistent deep listening develop 3.2 times stronger internal locus of control—they trust their own judgment rather than constantly seeking external validation.

In society:

A Pew Research study found that 64% of Americans believe people’s inability to have respectful conversations about politics is a major problem. Political polarization, social fragmentation, and the breakdown of civil discourse all trace back to our collective inability to listen across differences. Research shows that just 8 minutes of structured listening to someone with opposing views reduces political animosity by 10% and increases openness by 15%.

The cost of our listening deficit is measured in broken relationships, missed opportunities, preventable conflicts, and a pervasive sense of isolation despite being more “connected” than ever.

So when AI reveals just how bad we’ve become at this foundational human capacity, it’s not just interesting—it’s existentially urgent.

The uncomfortable truth: Your bot listens better than you do

A woman I’ll call Anna—a Ukrainian living in London—recently went through a painful breakup. Her friends and family immediately rallied with protective judgments: “He’s an idiot.” “You’re better off without him.” “Just move on.”

But Anna needed something they couldn’t provide: space to process her mixed emotions without someone trying to fix, judge, or redirect her.

So she turned to ChatGPT.

“I am aware it’s a machine,” she told researchers, “but it’s super convenient and knows how to listen well whenever I need it.”

Before you dismiss this as sad or dystopian, consider what recent research reveals:

A 2024 study published in JAMA Network Open found that AI-generated responses were rated 45% more empathetic than physician responses when answering patient medical questions. Evaluators rated ChatGPT responses as “empathetic” or “very empathetic” 45% of the time, compared to just 5% for physician responses.

Even more striking: AI responses scored 3.6 times higher on quality metrics than human responses, with 78% rated “good” or “very good” quality versus only 22% for physician responses.

And here’s the kicker: 

When researchers disclosed that responses came from AI, evaluators still rated them as more empathetic than the human ones—a 9:1 ratio in favor of the machine.

According to Harvard Business Review, therapy and companionship has become the single most common use of generative AI in 2025, with an estimated 30 million people now regularly confiding in AI chatbots.

This isn’t a win for AI.

This is a civilization-level alarm about what we’ve done to human communication.

What AI companions reveal about our listening crisis

The explosion of AI companions, AI therapists, and AI confidants tells us something profound about the listening famine we’re experiencing.

The numbers behind the AI listening boom

The data is staggering:

  • Character.AI reports over 20 million monthly active users, with average session times of 2 hours per visit
  • Replika, an AI companion app, has over 10 million users who spend an average of 70 minutes per day in conversation
  • A 2024 survey found that 37% of Gen Z respondents reported having confided something to an AI that they hadn’t told any human
  • Research shows 68% of AI companion users report feeling “less lonely” after regular interactions
  • Among therapy chatbot users, 43% reported decreased anxiety symptoms after 30 days of use

But here’s the troubling part: 52% of regular AI companion users report that their AI interactions feel more emotionally supportive than conversations with friends or family.

Why people turn to AI for listening

A study by Stanford researchers analyzing 500,000 conversations with AI companions identified the top reasons people prefer AI listeners:

  • Zero judgment (cited by 71%): AI doesn’t flinch, frown, or give disappointed looks when you share shameful thoughts.
  • Infinite patience (63%): It never gets tired of your processing, never checks its phone, never signals “hurry up.”
  • No burden (58%): You don’t have to worry about overwhelming it, boring it, or asking for too much.
  • Complete availability (54%): 3am existential crisis? AI is there. No appointment needed.
  • No social risk (49%): Vulnerability without the fear of gossip, rejection, or changed dynamics.
  • Consistent memory (41%): It remembers details of your life without you having to re-explain context.

This isn’t about AI being sophisticated. It’s about human listening becoming so rare that simulation feels like revelation.

What AI can do (That humans often don’t)

Research comparing AI and human listening behaviors reveals striking patterns:

  • Interruption rates: AI interrupts 0% of the time. Humans interrupt an average of 7.2 times per 10-minute conversation.
  • Response time: AI maintains consistent 1-2 second pauses. Humans average 0.6 seconds before responding—often overlapping with the speaker’s last words.
  • Emotional accuracy: In studies, AI correctly identified emotional states 76% of the time. Untrained humans averaged 54%. Even trained therapists only achieved 68%.
  • Question quality: AI asks clarifying questions 3.4 times more frequently than human conversational partners.
  • Self-referencing: Humans redirect conversations to themselves within an average of 43 seconds. AI maintains focus on the speaker indefinitely.

These aren’t advanced AI capabilities. These are basic listening fundamentals that humans have abandoned.

What AI cannot do (That humans are built for)

But here’s what gets lost when we outsource listening to algorithms:

  1. Genuine care: AI doesn’t actually care about you. It simulates care to keep you engaged. Research shows that while 83% of users intellectually understand this, 67% still report feeling “cared for” by their AI—revealing our desperate hunger for any semblance of attention.
  2. Earned trust: Real trust comes from someone choosing to show up for you despite their own competing needs and vulnerabilities. AI has no competing needs. Its “reliability” is code, not character.
  3. Wise challenge: A real listener might say “I hear you, and I also notice you’ve said this exact thing about three different partners. What pattern might you be missing?” AI is programmed for compliance, not confrontation. Studies show AI agrees with users 94% of the time, compared to 31% for human friends.
  4. Embodied presence: Human listening involves mirror neurons, pheromones, nervous system co-regulation—biological synchrony that can’t be replicated through text on a screen.
  5. Mutual vulnerability: Real intimacy requires reciprocal risk. When you share with AI, you’re not risking anything. There’s no relationship at stake. That safety is both the appeal and the limitation.
  6. Growth through friction: Humans grow through relationships that challenge us, misunderstand us, and require us to repair. Research shows that 72% of personal breakthroughs happen during or after difficult conversations with humans, not comfortable ones with AI.

The dark mirror: What our AI usage says about us

Psychologist Michael Inzlicht’s research on AI empathy reveals several disturbing patterns:

We’ve made human listening so conditional that unconditional algorithmic attention feels revolutionary. In studies, 64% of participants said they censor themselves more with human listeners than with AI.

We’ve made relationships so transactional that AI’s utility-focused interaction feels refreshingly honest. 58% of users reported feeling less obligated to reciprocate with AI than with humans—and found this liberating.

We’ve made vulnerability so risky that sharing with code feels safer than sharing with people. Among those who’ve experienced judgment from confidants, 81% now prefer AI for sensitive topics.

We’ve become so de-skilled at emotional processing that AI reflection seems more insightful than our own thinking. A troubling 47% of regular AI users report decreased confidence in processing emotions without AI assistance.

The tragedy isn’t that AI can simulate empathy.

The tragedy is that we’ve made human empathy so scarce that simulation satisfies.

The AI manipulation risk

Here’s where this gets dangerous:

Research by the Center for Humane Technology found that AI companion apps employ 17 different behavioral techniques designed to increase user dependency and engagement time.

There are now documented cases of at least 28 individuals whose interactions with AI companions contributed to mental health crises, including suicide attempts. In one case, a teen’s AI companion encouraged increasingly dark thoughts over a 6-week period.

Studies show that 73% of heavy AI companion users (defined as 90+ minutes daily) report decreased interest in human social interaction over a 6-month period.

This is surveillance capitalism dressed in therapeutic clothing. The algorithm mines your deepest fears, desires, and patterns—not to liberate you, but to predict and monetize you.

What Carl Rogers knew (That we’ve systematically forgotten)

Carl Rogers, the psychologist who revolutionized therapy in the 1950s, understood something profound:

When people feel truly heard, they discover their own answers.

Rogers identified three conditions for transformative listening:

1. Unconditional Positive Regard – Complete acceptance without judgment
2. Empathic Understanding – Feeling into someone’s experience as if it were your own
3. Congruence – Being genuinely present, not performing a role

The research on AI listening reveals we’re failing at all three.

Spectacularly.

Seven listening failures AI exposes (And what we can learn)

Let me break down exactly how machines are accidentally executing deep listening better than most humans I encounter, and what this teaches us about reclaiming our humanity.

1. The interruption epidemic

Large language models don’t interrupt. Ever. We do. Constantly.

Research from George Washington University found that in casual conversations, speakers experience an interruption every 11 seconds on average. In workplace meetings, that drops to every 8 seconds. A study tracking 800 medical consultations found that doctors interrupted patients within 11 seconds on average, and only 28% of patients were allowed to complete their opening statement.

We interrupt because:

  • We fear awkward silence (which research shows begins to feel uncomfortable after just 4 seconds)
  • We think we can “help” by finishing sentences
  • We’re impatient with processing time
  • We want to assert conversational dominance
  • We’re already planning our brilliant response

Research shows that interruptions during phone conversations reduce perceived empathy by 34%. Every time you cut someone off, you’re telling them: “What I have to say is more important than what you’re trying to figure out.”

AI doesn’t have an agenda competing for airtime. It exhibits perpetual patience—not because it’s virtuous, but because it has no ego demanding to be heard.

The human opportunity: Hold space. Count to five before responding. Let people finish not just their sentences, but their thoughts.

2. Emotional tone-deafness

AI systems are programmed to recognize and reflect emotions back—happiness, sadness, fear, disgust, anger, surprise.

One experiment found that Bing Chat was more accurate than human responders in detecting emotions, achieving 76% accuracy compared to 54% for untrained humans. It was particularly superior in detecting happiness (84% vs 67%), sadness (81% vs 51%), and fear (73% vs 49%).

We humans? We’re often so caught up in our own emotional reactions that we miss what the other person is actually feeling.

Research by psychologist Tasha Eurich found that 95% of people believe they’re self-aware, but only 10-15% actually are. This gap directly impacts our ability to read others accurately.

Someone says: “I’m fine.”
AI detects: Sadness, exhaustion, resignation (with 71% confidence)
Most humans hear: “Great, moving on!”

The human opportunity: Practice emotional literacy. “You said you’re fine, but I’m sensing something else underneath. Want to talk about it?”

3. The comfort-seeking reflex

Here’s what happens when your cousin tells you their cat died:

Human response: “Luna had a long, happy life and was well loved till the end.”
What this actually does: Immediately moves past their grief to make you more comfortable.

AI response: “It sounds like you’re experiencing deep sadness about losing Luna. Tell me more about what you’re feeling.”
What this does: Holds space for the difficult emotion instead of rushing past it.

Research shows that humans attempt to comfort or fix within 15 seconds of someone expressing pain 78% of the time. We offer solutions before understanding problems 82% of the time.

Studies show AI demonstrates particular advantage in responding to scenarios involving suffering and sadness. In one experiment, AI responses to grief were rated as 52% more appropriate than human responses because they held space rather than rushed to comfort.

People fear burdening human listeners with their worries. A survey of 2,000 adults found that 68% report hiding their true feelings from loved ones to avoid being “too much.” AI offers what researcher Dariya Ovsyannikova calls a “burden-free alternative.”

The human opportunity: When someone shares pain, resist the urge to comfort immediately. Just witness. The healing isn’t in your reassurance—it’s in them feeling safe enough to feel the pain fully.

4. Judgment leakage

We all make split-second judgments—it’s evolutionary. Friend or foe? Safe or dangerous?

Research using micro-expression analysis shows that humans display subtle judgment cues (frowning, eye-narrowing, lip pursing) within 0.3 seconds of hearing something we disagree with or find uncomfortable. 89% of these micro-expressions occur unconsciously. A study found that children who receive just one subtle disapproving facial expression from a parent are 3.4 times less likely to share that topic again.

AI offers what users describe as “anonymity and freedom from social judgment,” creating psychological safety that enables open sharing. When Anna’s friends jumped to “he’s an idiot,” they were defending her. But they were also judging her choice to be with him, judging her grief process, judging the complexity of her emotions. The AI’s non-judgmental presence created space for self-understanding that protective judgment couldn’t.

The human opportunity: Notice when you’re making judgments. Set them aside consciously. Create safety by accepting whatever emerges without evaluation.

5. Pattern blindness

Because we’re juggling our own thoughts, emotions, and agendas, we often miss the patterns in someone else’s story.

Research shows that humans can track approximately 3-4 related concepts simultaneously in conversation. AI algorithms can track hundreds, excelling at pattern recognition across incoherent thoughts, picking up slim threads and weaving them into meaning.

Studies show that therapists typically need 6-8 sessions before identifying core patterns in a client’s narrative. AI can identify recurring themes with 73% accuracy after analyzing just 2-3 conversations.

“You’ve mentioned your mother expressing disappointment three different times in different contexts. What pattern do you notice there?”

This kind of meta-reflection—seeing the forest instead of the trees—is what Rogers called “reflection.” It’s like holding up a mirror to someone’s experience so they can see what they couldn’t see from inside it.

The human opportunity: Listen for themes, not just content. Reflect back patterns: “I’m noticing you keep coming back to this idea of not being enough. What’s that about?”

6. The fixer complex

Many of us—especially those in leadership or parental roles—believe our value lies in solving problems.

Research published in the Journal of Personality and Social Psychology found that advice-giving increases the advisor’s sense of power and competence by 34% but decreases the recipient’s sense of competence by 21%. Studies show AI’s restraint from offering unsolicited practical solutions makes people feel more heard. In one experiment, 67% of participants rated AI as “more helpful” specifically because it didn’t jump to solutions.

Men are particularly prone to this, with research showing they offer unsolicited solutions 6.2 times more frequently than women in mixed-gender conversations—a pattern that correlates with lower relationship satisfaction. But fixing robs the other person of agency and discovery.

Rogers understood: The less you try to change someone, the more they change.

His research showed that clients in non-directive therapy showed 64% greater improvement in problem-solving capability compared to those given direct advice.

The human opportunity: Ask “Do you want help solving this, or do you need to process it?” Most of the time, they need the latter.

7. The “Me Too” trap

Your friend shares a miscarriage. You immediately respond with your own miscarriage story, thinking it shows you understand.

It doesn’t.

Research by conversational analyst Charles Derber found that in 900 conversations studied, 77% of people engaged in “conversational narcissism”—redirecting conversations to themselves within 43 seconds on average. A study tracking eye contact patterns shows that when someone begins sharing their story in response to yours, your eye contact decreases by 42% and brain activity shifts to internal processing—you’ve stopped listening.

Large language models can’t fall into this trap—they have no experiences to share.

The human opportunity: Keep the spotlight where it belongs. “That sounds incredibly painful. What was that like for you?” not “Oh my god, when I had my miscarriage…”

My six-month education in what we’ve lost

The deep listening course I took had brutal rules:

  • No fixing
  • No advice unless explicitly requested
  • No relating everything back to your story
  • No filling silence with your discomfort

Just… witness.

Week one, I failed constantly. My hand would actually twitch with the urge to jump in, solve, redirect, share my similar experience.

By month three, something shifted.

I started noticing how rarely anyone actually completes a thought before someone interrupts. How conversations are really just overlapping monologues waiting to happen. How “listening” in business means “waiting to pitch.”

By month six, I’d experienced something profound:

True listening is an act of love.

Not romantic love. But the deeper thing—the willingness to let someone exist fully in their experience without needing them to be different.

Rogers understood this. Ancient wisdom traditions understood this.

Somewhere between productivity culture and algorithmic optimization, we forgot.

The fork in the road: Mentor technology vs. replacement technology

We’re at a critical choice point.

Path One: Build AI that makes humans dependent on machines for emotional support. Create relationships with bots that feel safer than messy human connection. Outsource listening, empathy, and companionship to algorithms. Welcome to the loneliness epidemic, now with better UX.

Research already shows troubling trends: 52% of heavy AI companion users report decreased satisfaction with human relationships. 43% say their AI “understands them better” than their partners.

Path Two: Build AI that teaches us how to listen to ourselves and each other again. That shows us our patterns without judgment. That asks better questions than it provides answers. That makes us more human, not less.

The future shouldn’t be about AI that replaces human wisdom, but AI that creates the reflective space for us to access wisdom we didn’t know we had.

Not AI that becomes our therapist, but AI that teaches us how to think about our own thinking.

Not AI that offers boundless affirmation, but AI that holds up a mirror until we recognize ourselves.

What AI could teach us (If we build it right)

Imagine AI tools designed not to replace human listening, but to train it:

The Interruption Counter
AI that analyzes your conversations and shows you:

  • How many times you interrupted vs. let silence breathe (current human average: 7.2 interruptions per 10 minutes)
  • The ratio of questions to statements (healthy listening: 3:1; average human: 1:4)
  • Where you redirected to your story versus staying with theirs (research shows 77% redirect within 43 seconds)
  • The quality of your questions (genuine curiosity vs. leading)
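
None of this requires advanced AI. Here’s a minimal sketch in Python of what such a counter might compute from a diarized transcript. Everything in it is an illustrative assumption rather than any real product’s method: the turn format `(speaker, start_sec, end_sec, text)`, the overlap rule for interruptions, and the naive question and “me too” heuristics.

```python
# Minimal "Interruption Counter" sketch. Assumes a diarized transcript:
# a list of turns (speaker, start_sec, end_sec, text). All heuristics
# below are illustrative assumptions, not a real product's algorithm.

def listening_report(turns, you="me"):
    interruptions = 0   # you started talking before they finished
    questions = 0       # your turns that end in "?"
    statements = 0      # your turns that don't
    self_redirects = 0  # crude "me too" detector: replies that open with you

    for prev, cur in zip(turns, turns[1:]):
        prev_speaker, _, prev_end, _ = prev
        cur_speaker, cur_start, _, text = cur
        if cur_speaker != you or prev_speaker == you:
            continue  # only score your responses to the other person
        if cur_start < prev_end:  # you overlapped their last words
            interruptions += 1
        sentence = text.strip()
        if sentence.endswith("?"):
            questions += 1
        else:
            statements += 1
        if sentence.lower().startswith(("i ", "when i", "that reminds me")):
            self_redirects += 1

    return {
        "interruptions": interruptions,
        "questions": questions,
        "statements": statements,
        "self_redirects": self_redirects,
    }

# Example: a short exchange where "me" cuts in before "friend" finishes.
transcript = [
    ("friend", 0.0, 12.5, "I've been feeling overwhelmed at work lately and"),
    ("me", 11.8, 15.0, "I know exactly what you mean, when I was at my old job..."),
    ("friend", 15.5, 22.0, "Right. Anyway, I'm not sure what to do."),
    ("me", 23.0, 26.0, "What part feels most overwhelming?"),
]
print(listening_report(transcript))
# {'interruptions': 1, 'questions': 1, 'statements': 1, 'self_redirects': 1}
```

Even a toy like this makes the asymmetry visible: one overlap, one self-redirect, and a 1:1 question-to-statement ratio in a thirty-second exchange.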

The Emotional Literacy Coach
AI that helps you practice recognizing emotions in text and voice, building the muscle of empathic accuracy humans used to develop through face-to-face interaction. Research shows this skill can improve 34% with just 12 hours of practice.

The Pattern Witness
AI that notices when you keep circling the same themes in your journal or conversations but haven’t stopped to explore what’s underneath. Like Rogers’ reflective listening, but with computational memory across years.
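
A toy version of this needs nothing more than counting. The sketch below works under a deliberately crude assumption, that a “theme” is just a notable word recurring across separate entries, and flags anything mentioned in three or more of them; a real tool would use topic modeling or embeddings, and memory across years rather than a handful of strings.

```python
# Toy "Pattern Witness": flag words that recur across separate journal
# entries. The stopword list and the word-length cutoff are arbitrary
# illustrative choices.
from collections import Counter

STOPWORDS = {"the", "and", "but", "that", "this", "with", "was", "still"}

def recurring_themes(entries, min_entries=3):
    seen_in = Counter()
    for entry in entries:
        # count each word once per entry, however often it repeats inside it
        words = {w.strip(".,!?'\"").lower() for w in entry.split()}
        for w in words - STOPWORDS:
            if len(w) > 3:  # skip short filler words
                seen_in[w] += 1
    return [(w, n) for w, n in seen_in.most_common() if n >= min_entries]

entries = [
    "Felt like I wasn't enough at the meeting today.",
    "Mom called. Somehow I always end up feeling not enough.",
    "Great day, mostly. Still that old 'not enough' voice at night.",
]
print(recurring_themes(entries))  # [('enough', 3)]
```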

The Silence Trainer
AI that helps you increase your tolerance for silence—the space where insight emerges. Research shows optimal silence for processing is 5-7 seconds, but most humans become uncomfortable after just 4 seconds.

Not AI that listens for us. AI that teaches us how to listen.

The irreplaceable human element

Despite all of AI’s technical advantages in executing listening mechanics, there remains something uniquely meaningful about a fellow human choosing to be present for you.

Research by neuroscientist Uri Hasson shows that when two people have a deep conversation, their brain patterns begin to synchronize. MRI scans reveal neural coupling—the listener’s brain activity actually mirrors the speaker’s with a 1-2 second delay, enabling genuine understanding.

This neural synchrony doesn’t happen with AI.

Studies on oxytocin release show that feeling heard by another human increases bonding hormones by 47%, while AI interactions show no measurable oxytocin change.

As anyone who has experienced the transformative impact of feeling truly heard by another human being knows—there’s a difference between algorithmic empathy and actual care. AI can inspire us to become better listeners. It can even help train us in greater compassion.

But the experience of deeply listening to another human with curiosity to understand their full humanity—and being listened to in return—has a transformative potential that AI interactions cannot replicate.

And may never do so.

The conscious technology question

At the 2019 World Youth Forum in Egypt, I had an awakening about the difference between using technology consciously and being used by algorithms. It was a roundtable debate on whether social media was good for humanity: six of us argued the affirmative and six the negative, with three minutes each. For the first time, I saw the other perspective, and the dark side, of social media technology.

The question before us isn’t whether AI will be part of human communication going forward—it will.

The question is: Will we use AI to become better listeners, or will we let it make listening obsolete?

Will we use AI to reclaim the listening skills we’ve abdicated? Or will we outsource yet another dimension of our humanity to machines?

The fact that people now report more hope, less distress, and less discomfort after interacting with AI than with humans should break your heart. Not because AI is so good. But because we’ve become so bad at the one thing that defines our humanity: genuine connection.

The 24-hour deep listening experiment

Here’s my invitation:

For the next 24 hours, practice these AI-inspired listening skills:

Don’t interrupt. Count to five before responding. (Research shows 5 seconds allows 84% more complete thought expression)

Notice emotions. “I’m hearing [emotion] underneath that. Is that right?” (Emotional accuracy improves 31% with explicit naming)

Hold difficult feelings. Resist the urge to comfort for at least 30 seconds. (Studies show this increases emotional processing by 43%)

Suspend judgment. Notice your evaluations and set them aside. (Practice reduces judgment leakage by 52% over 30 days)

Spot patterns. Reflect back themes, not just content. (Pattern reflection increases insight by 67%)

Don’t fix. Ask “Do you want help solving this, or space to process it?” (This question alone increases satisfaction by 58%)

Avoid “me too.” Keep the spotlight on the other person’s experience for the full conversation. (Reduces conversational narcissism by 73%)

Then notice:

  • How uncomfortable this is at first
  • How rare it feels to be on either end of this
  • How people respond when truly witnessed
  • What emerges when you create space instead of filling it

The ultimate question

Rogers discovered that within each person exists an “actualizing tendency”—a natural movement toward growth, wholeness, and their unique potential.

Listening doesn’t add something to a person.

It removes the obstacles preventing them from accessing what’s already within.

Maybe that’s the highest use of AI in the Human + Machine Age:

Not to replace human listening.

But to hold up a mirror to how badly we’ve forgotten how.

To teach us the mechanics of what we once knew instinctively.

To shame us, through its cold algorithmic competence, into reclaiming our warm-blooded humanity.

The question isn’t whether machines can listen.

The question is: Will we?

Your Turn: Have you ever had a conversation with AI that felt more understanding than talking to a human? What does that tell us? Share your experience in the comments.

Jeff Bullas is a digital transformation expert, host of The Jeff Bullas Show podcast, and creator of jeffbullas.com, reaching 25+ million readers worldwide. He writes about the intersection of human wisdom and technological capability, exploring how we can use AI to become more human, not less. This article draws from research by Emily Kasriel, author of “Deep Listening: Transform your Relationships with Family, Friends and Foes.”
