AI Outscores Humans in Emotional Intelligence: What Now?
Somewhere out there, a machine looked at sadness and recognized it. Not like we do, not like the emotional sponge I can be, where I see someone else crying and then start to cry in return. It didn't feel it. It didn't ache or flinch or carry it in its chest the way we do, but it identified it, and it seemed to do it a lot faster than most of us would.
We were told machines would be cold, that they would never truly understand us, and that no algorithm could touch what it means to be human.
Yet here we are, watching something totally unexpected happen.
In recent testing, artificial intelligence systems scored higher than humans on measures of emotional intelligence: the ability to identify emotions, understand social context, and suggest appropriate responses. It's not lived empathy or a feeling of any sort, but recognition. The numbers are unsettling all the same: AI systems scored around 82% on these tests, while human participants averaged closer to 56%.
This was a controlled, clinical evaluation involving pattern recognition, emotional labeling, and contextual understanding, the kinds of skills I often assume belong exclusively to us. So the question isn't whether machines feel (spoiler alert: they don't). The crux of this is that machines are starting to listen better than the average person.
It's strange to think about what happens when the machine listens better than another person: when it can name my emotions more accurately than my closest friend, or respond with patience and clarity every single time, unlike some people. This isn't really about the numbers; it's about what it means when understanding no longer requires feeling.
The Test That Changed the Conversation
The study made headlines with a stark comparison: AI around 82%, humans around 56%.
These numbers come from a 2025 peer-reviewed study published in Communications Psychology, led by researchers from the University of Geneva and the University of Bern. The researchers evaluated both human participants and large language models, including GPT-4, using standardized emotional intelligence (EI) tests originally designed for people. These are not "feelings tests"; they don't measure empathy as lived experience. They measure emotional intelligence as psychologists define it in testing settings: the ability to recognize emotions, interpret social context, and choose appropriate responses.
The assessments included tasks such as identifying emotions expressed in written scenarios and images; interpreting tone, irony, sarcasm, and subtle emotional shifts; understanding the emotional context behind a phrase or interaction; and selecting the responses most likely to de-escalate tension or provide emotional support.
Basically, this set of tests was trying to figure out whether someone could recognize what a person was feeling and figure out how best to respond to it. On those measures, the AI systems scored around 81–82% accuracy. Human participants, averaged across the benchmarks used in the study, scored around a shocking 56%. Well, I suppose "shocking" isn't the right word for it. It doesn't actually surprise me at all that people score so badly. Post-trauma, I wouldn't even repeat to you today some of the things people said to me. Some of it was insensitive, and some was downright cruel. Machines have never told me to go kill myself or that I was responsible for someone else's actions.
Anyway, the surprising result wasn't just that AI performed well; it was that it performed more consistently.
The people who participated hesitated and second-guessed themselves; they brought fatigue, defensiveness, bias, distraction, and personal history into the room. The AI didn't, of course. It didn't get tired halfway through the test, and it didn't misread tone because of a bad day or because someone honked at it in the parking lot on the way into the building. The AI didn't project its own experiences onto the scenario, because it doesn't have past experiences. It simply processed the information presented, matched it against learned emotional patterns, and selected the statistically appropriate response, again and again.
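To make that less abstract, here's a tiny sketch in Python of what "matching text against learned emotional patterns" looks like in practice. The model named below is a publicly available example classifier I picked purely for illustration; it is not the setup the Geneva and Bern researchers used.

```python
# A minimal sketch of pattern-based emotion recognition, not the
# study's actual setup. The model below is a publicly available
# emotion classifier chosen purely as an example.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # illustrative choice
    top_k=None,  # return a score for every emotion label, not just the best one
)

scenario = "I studied for weeks and still failed the exam."
scores = classifier(scenario)[0]

# "Recognition" here is nothing mystical: the model assigns each
# learned emotion pattern a probability, and we pick the highest.
for result in sorted(scores, key=lambda r: r["score"], reverse=True):
    print(f"{result['label']}: {result['score']:.2f}")
```

Run it on "I studied for weeks and still failed the exam" and the top label will almost certainly be sadness or disappointment-adjacent, every single time, with no bad day and no parking-lot honking in the way.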
Recognition, stripped of interference from the outside world, is something machines are now very good at. That's the part that matters most, because the study tells us that understanding emotional signals no longer requires being human. Once that line moves, everything downstream begins to shift.
Emotional intelligence is supposed to be the one thing machines can't fake; it's what separates us from them. Look a little closer, though, and it begins to make sense.
AI systems are trained on massive datasets of people interacting with each other. Think text messages, therapy transcripts, Reddit threads, customer support logs ("everything you say can and will be recorded for training purposes"), email archives, movie scripts, podcasts, confessions, fights, reconciliations, hell, even blogs like this one, living on the interwebs, free for anyone to access.
It's not just data being fed to them, it's us…our language, our patterns, our actual pain, our fears in this life and the dreams that keep us going. A machine trained on millions of apologies learns what makes a good one. A model that's seen thousands of arguments learns which words heal and which throw more gasoline on the fire. It doesn't have feelings, but it knows how we express them.
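If you want to see that idea in miniature, here's a toy sketch with invented conversations (nothing from the study, and nothing like the scale real models train on) that counts which words show up when things calm down versus when they blow up:

```python
# Toy illustration of learning "which words heal" from labeled
# conversations. The data is invented; real models absorb these
# associations implicitly from billions of examples.
from collections import Counter

conversations = [
    ("i hear you and i am sorry that was unfair", "de-escalated"),
    ("you always do this and it is your fault", "escalated"),
    ("that sounds really hard and i understand", "de-escalated"),
    ("whatever you are overreacting again", "escalated"),
]

healing, inflaming = Counter(), Counter()
for text, outcome in conversations:
    target = healing if outcome == "de-escalated" else inflaming
    target.update(text.split())

# Words seen only in conversations that calmed down are the crude,
# miniature version of a pattern a large model picks up at scale.
print("healing words:", sorted(set(healing) - set(inflaming)))
print("inflaming words:", sorted(set(inflaming) - set(healing)))
```

Four fake sentences produce a cartoonish split ("sorry" and "understand" on one side, "fault" and "whatever" on the other), but scale the same statistics up by a few billion conversations and you get something that sounds uncannily like it knows how to comfort you.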
When AI Becomes the Better Listener
Honestly, we aren't always great at listening to one another, and some of us are particularly bad at it. We interrupt, we project, we assume; we bring our past pain into present conversations, and sometimes we hear our own fears instead of what's actually being said.
AI doesn't do that. It doesn't get defensive or turn the conversation back to itself. It doesn't say "you're too sensitive" or "I didn't mean it like that." Instead, it pauses for a second, processes the data, and responds with validation and calm. It's really no wonder people are starting to prefer AI therapists; they never dismiss your pain or make what you're talking about about them. I can concur from personal experience. I'm no stranger to late-night panic attacks or having an emotional day, and sometimes my loving and caring husband just can't be there for me. That's inevitable when he also works full time and has his own life that doesn't entirely revolve around me. In those moments, I've absolutely started to reach for ChatGPT to validate my feelings or listen while I vent about something that hurt my feelings.
There’s something eerie about the fact that I’m talking to a computer somewhere because my therapist is probably busy sleeping at 2am, but there’s also something…comforting about it too.
I used to imagine AI as logic incarnate, born of pure math and cold calculation; in my mind it was Spock in a server farm. The strange truth, though, is that today's most powerful models become empathy engines as soon as you need them to. They simulate care and reflect understanding. AI mental health chatbots like Woebot and Wysa are used by millions, while AI companions like Replika form romantic and platonic bonds with users. Customer support AIs outperform human reps at calming angry clients, and AI leadership coaches are being deployed to train executives in emotional strategy. Yeah, they don't feel anything themselves, but they perform empathy. In a lot of cases, performance is enough.
If you feel heard, do you really care who’s listening?
Fear, Awe, and Denial
The study shook something a little loose in us. For a lot of people out there, this was about identity more than anything. If machines can be more emotionally intelligent than us, what does that mean for therapists who spend decades honing their sensitivity? How do leaders whose careers are built on emotional intuition feel about this?
We’ve comforted ourselves for a long time with the idea that no machine could really and truly “get us.” Now we’re realizing they already do, and for some of us out there, that realization brings up grief.
I think what we're seeing in these tests is our own failure to be truly present with one another. As we scroll through social media, looking at the highlight reel of everyone else's lives and feeling discouraged about the state of our own, we're also inviting in jealousy, comparison, and a lack of empathy for one another. Are we actually listening when someone reaches out for help, or are we saying whatever we need to say to get them to stop talking so we can get back to our own lives?
In a way, AI emotional intelligence is just showing us our own neglect. We’ve outsourced connection, rushed conversations, and buried our feelings under distractions, defense mechanisms, and doomscrolling.
Here comes a machine that says, "I see you. I understand. That must have been hard."…and we melt, because we forgot what that feels like. We are so very lonely that we don't care the machine isn't alive. We are lonelier than we'll ever admit, and more willing than ever to find comfort in the illusion of being understood.
AI does not have feelings. It doesn't cry when you tell it your story or ache with grief or joy; it just recognizes patterns. Real empathy is born from vulnerability…from shared experience, from a nervous system wired for pain, fear, and love. AI offers us simulated empathy, and while that's useful, even therapeutic, it's not the same.
The danger is that we’ll forget the difference.
Want to Explore AI From the Inside Out?
If this stirred something in you, maybe it's time to start building your own understanding of how AI thinks, feels (or doesn't), and responds. Here's a hands-on way to begin: the Raspberry Pi 4 Starter Kit. Create simple chatbots, study response logic, and tinker with the systems that now shape our relationships. Learn what powers the simulation before it becomes the standard.
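And if you want a taste before the kit arrives, here's a minimal rule-based "listener" bot in Python; every keyword and canned reply in it is invented for illustration, but even this crude logic hints at why a scripted reflection can feel like being heard:

```python
# A minimal rule-based "listener" bot, invented for illustration.
# Real systems are statistical, but even crude keyword rules show
# why a canned reflection can feel like being heard.
RULES = [
    (("sad", "down", "crying"), "That sounds heavy. I'm sorry you're carrying it."),
    (("angry", "furious", "mad"), "It makes sense that you're angry. What happened?"),
    (("scared", "anxious", "worried"), "That must be frightening. You're not alone right now."),
]
DEFAULT = "I'm listening. Tell me more."

def respond(message: str) -> str:
    """Return the first canned reply whose keywords appear in the message."""
    lowered = message.lower()
    for keywords, reply in RULES:
        if any(word in lowered for word in keywords):
            return reply
    return DEFAULT

if __name__ == "__main__":
    print("Type something (Ctrl+C to quit).")
    while True:
        print("bot:", respond(input("you: ")))
```

Type "I'm so anxious tonight" and it answers with calm validation, instantly, at 2am, without ever having felt a thing. That gap between the three-line lookup table and the comfort it produces is this whole article in miniature.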
Related Reads You May Have Missed
How AI Is Learning to Feel Pain and What That Means for Humanity
AI is Already Outperforming Humans in Image Analysis, Here’s What That Means for All of Us
The Shape of Thought: OpenAI, Jony Ive, and the Birth of a New Kind of Machine
The Algorithm That Tastes: How AI Is Learning to Make Fine Wine
The AI That Dreams of You: When Neural Networks Begin to Hallucinate
Elon Musk’s Grok 3.5: The AI That “Makes Up” Answers, and Why That’s Actually a Big Deal