The Best AI Users Aren't Technical. They're Social.
My AI agent dragnet surfaced this paper last week. I have a system that scans for research I might find interesting, and this one hit differently.
I've spent nearly 20 years in leadership and management roles. I spike high on social and emotional intelligence assessments. And I've noticed something: my results with AI tools are atypical compared with those of most people I work with. I can intuitively model what the AI knows and doesn't know, and that seems to matter.
This paper explains why.
So why do some people get remarkable results from AI while others, using the same tools, the same models, sometimes even the same prompts, get garbage? Christoph Riedl (Northeastern) and Ben Weidmann (UCL) set out to answer that question rigorously. They ran a study with 667 people solving problems in math, physics, and moral reasoning, both alone and with AI assistance, testing GPT-4o and Llama-3.1-8B, and used Bayesian Item Response Theory to separate what makes someone good at working alone from what makes them good at working with AI.
The findings broke several assumptions I didn't know I had.

Working With AI Is a Separate Skill
This is what stopped me cold: being good at something doesn't make you good at doing it with AI. These are fundamentally different abilities.
The researchers measured two abilities separately—individual ability (θ) and collaborative ability (κ). These aren't correlated the way you'd expect. Top performers working solo weren't automatically top performers when paired with AI. The model comparison was decisive: ΔELPD = 50.9 (SE = 10.2), strongly favoring the two-ability model over a single-ability model. In plain terms: treating "working with AI" as a separate skill from "working alone" predicted people's actual performance far better than assuming one ability explains both. The statistical evidence isn't marginal—it's overwhelming.
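To make that concrete, here's a minimal sketch of what a two-ability IRT model can look like, written in PyMC. Everything below is my own illustration under simple Rasch-style assumptions: the toy data, priors, and variable names are not the authors' specification, and the paper's actual model is surely richer.

```python
import numpy as np
import pymc as pm

# Toy data, purely hypothetical, just to show the structure.
rng = np.random.default_rng(0)
n_users, n_items, n_obs = 50, 20, 600
user = rng.integers(n_users, size=n_obs)
item = rng.integers(n_items, size=n_obs)
with_ai = rng.integers(2, size=n_obs)   # 0 = solo trial, 1 = AI-assisted
correct = rng.integers(2, size=n_obs)   # placeholder right/wrong outcomes

with pm.Model() as two_ability_irt:
    theta = pm.Normal("theta", 0.0, 1.0, shape=n_users)  # individual ability
    kappa = pm.Normal("kappa", 0.0, 1.0, shape=n_users)  # collaborative ability
    b = pm.Normal("b", 0.0, 1.0, shape=n_items)          # item difficulty
    # Solo trials load only on theta; AI-assisted trials add kappa.
    ability = theta[user] + with_ai * kappa[user]
    pm.Bernoulli("y", p=pm.math.sigmoid(ability - b[item]), observed=correct)
    idata = pm.sample()
```

Fit the same data again with kappa removed and compare the two models' expected log predictive density (ELPD, e.g. via ArviZ); that is the kind of comparison a figure like ΔELPD = 50.9 comes from, though the paper's exact machinery may differ.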

"Our approach transforms human-AI collaboration from an anecdotal observation into a measurable, optimizable dimension of AI capability."

You might be thinking: surely the smartest people get the best results from AI? The data says otherwise. This changes everything about how we should think about AI proficiency. It's not "are you smart enough to use AI well?" It's "do you have the specific skill of collaborating with AI?"—which turns out to be a different thing entirely.
AI Helps Novices Most, But Experts Still Win
Two effects are happening simultaneously, and they seem contradictory until you think them through:
Equalizing effect: Lower-skilled users got the biggest relative boost from AI. The correlation between individual ability and AI boost was strongly negative (ρp = -0.91)—that's nearly a perfect inverse relationship. Translation: the worse you are at something solo, the more AI helps you. AI closes skill gaps.
Complementarity effect: But the highest-ability users still achieved the best overall performance when working with AI (ρp = 0.47)—a moderate positive correlation. Translation: experts with AI still beat novices with AI. The gap shrinks, but the ranking doesn't flip.
Think of it like electric bikes. The amateur cyclist gets a bigger relative speed boost than the pro. But the pro on an e-bike still finishes first. You want to be the pro on the e-bike, right?
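If it seems impossible for both correlations to hold at once, a ten-line simulation shows how. The numbers below are invented to mimic the pattern, not taken from the paper: AI lifts everyone, the lift shrinks as solo skill rises, and yet the ranking survives.

```python
import numpy as np

rng = np.random.default_rng(42)
solo = rng.normal(0.0, 1.0, 10_000)   # individual ability
# Hypothetical AI-assisted score: everyone gets a lift, but the slope < 1
# compresses the skill gap between novices and experts.
with_ai = 1.0 + 0.4 * solo + rng.normal(0.0, 0.2, 10_000)
boost = with_ai - solo

print(np.corrcoef(solo, boost)[0, 1])    # ~ -0.95: equalizing effect
print(np.corrcoef(solo, with_ai)[0, 1])  # ~ +0.89: experts still on top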

[Aside: This makes sense when you think about it. AI can give you a correct answer, but you still need to know if it's the right answer for your situation. Note that expertise doesn't disappear; it shifts to evaluation and integration.]
One more finding worth noting: the biggest gains came on the hardest problems (ρs = 0.67)—a strong positive correlation between problem difficulty and AI benefit. AI acts as a cognitive amplifier precisely when you're pushing the edge of what you can do alone. That's not intuitive, right? You'd expect diminishing returns at the frontier. But the opposite is true.

The Secret Ingredient Is a Social Skill
This is the finding that got me: the skill that predicts AI collaboration success has nothing to do with technical ability.
It's Theory of Mind.
I know, I know—"Theory of Mind" sounds like academic jargon. But stick with me.
ToM is the ability to reason about what someone else knows, believes, and intends. It's how you anticipate what information they need, recognize when they're confused, and adapt your communication accordingly. It's a deeply social skill—the thing that makes human conversation work.
The study found ToM strongly predicted collaborative ability with AI (coefficient: 0.65, 95% CI: 0.01 to 1.29). Translation: higher Theory of Mind scores meant better AI collaboration, and the entire confidence interval sits above zero, so the data point clearly toward a real effect. But here's the kicker: ToM had no significant correlation with solo ability (0.41, 95% CI: -0.17 to 0.99). That interval crosses zero, which means we can't rule out that the relationship is nothing at all. ToM helps you work with AI. It doesn't make you smarter without AI.
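The interval logic is mechanical enough to write down. This trivial helper is mine, not the paper's; it just encodes the rule the paragraph above applies:

```python
def ci_excludes_zero(lo: float, hi: float) -> bool:
    """True when a 95% CI rules out 'no effect' (zero lies outside it)."""
    return lo > 0 or hi < 0

print(ci_excludes_zero(0.01, 1.29))    # ToM -> AI collaboration: True
print(ci_excludes_zero(-0.17, 0.99))   # ToM -> solo ability: False
```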

Users with strong ToM do things like:
- Provide context: "I'm a beginner in physics, explain this simply"
- Establish rapport: Simple acknowledgments like "thanks" or "okay" to guide the dialogue
- Seek confirmation: "Is my approach correct?" or "Could you be wrong?"
They're not treating AI like a search engine. They're treating it like a collaborator with its own knowledge state that needs to be understood and managed. They're running something like a mini-OODA loop in their heads: observe the AI's response, orient to what it might be missing, decide what context to add, act with a refined prompt. That perspective shift, seeing the interaction from the AI's side and not just your own, is the key.
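Here's what that loop can look like in practice, sketched with the OpenAI Python client. The model name, the prompts, and the hard-coded redirect are all illustrative; in a real session, the "orient" step is your own judgment, not a canned string.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    # Theory-of-Mind move: provide context about what *you* know.
    {"role": "user",
     "content": "I'm a beginner in physics. Explain simply: why does a "
                "ball thrown straight up slow down as it rises?"},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# Observe the reply, orient to what it may have glossed over, then redirect.
messages.append({"role": "user",
                 "content": "Thanks, that helps, but you assumed no air "
                            "resistance. Could you be wrong? Redo it with "
                            "drag included, still at beginner level."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```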
"Sophisticated thinking rarely occurs in isolation; it emerges instead through dialogue, feedback, refinement, and the integration of diverse perspectives."
Your Mindset Changes the AI's Output
Now it gets really interesting.
ToM isn't a fixed trait. It fluctuates moment to moment. And those fluctuations directly affect the quality of AI responses.
The researchers decomposed ToM into a stable user-level trait and dynamic within-question variation. Both mattered. The trait effect was significant (β = 0.27, p < 0.001)—your baseline ToM predicts how well you'll do with AI in general. But the moment-to-moment variation also had a significant effect (β = 0.09, p = 0.007). Translation: even controlling for your general ToM level, actively engaging your ToM during a specific interaction improves the AI's output on that specific problem. Your mindset in the moment matters, not just who you are.
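The trait/state split is a standard within-between decomposition, and it's easy to sketch. The column names, toy data, and mixed-model call below are my guess at a plausible analysis, not the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per user-question interaction.
rng = np.random.default_rng(7)
n_users, n_per = 40, 10
user = np.repeat(np.arange(n_users), n_per)
tom = rng.normal(rng.normal(0, 1, n_users)[user], 0.5)  # trait + momentary noise
score = 0.27 * tom + rng.normal(0, 1, user.size)        # toy outcome
df = pd.DataFrame({"user": user, "tom": tom, "score": score})

# Between-person component: each user's mean ToM (the stable trait).
df["tom_trait"] = df.groupby("user")["tom"].transform("mean")
# Within-person component: deviation from one's own mean (momentary engagement).
df["tom_state"] = df["tom"] - df["tom_trait"]

# Separate coefficients mirror the paper's trait (0.27) and state (0.09) effects.
fit = smf.mixedlm("score ~ tom_trait + tom_state", df, groups=df["user"]).fit()
print(fit.params[["tom_trait", "tom_state"]])
```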

When users actively tried to understand the AI's perspective—asking for justifications, building on previous responses ("this is not what I meant, the answer should be formatted as a fraction")—the AI produced better output for that specific problem.
The quality of an AI's response isn't just a property of the model. It's an emergent outcome of the interaction. This is a starkly different mental model than "AI as tool."
What This Means
Frankly, this explains something I've watched play out repeatedly: why some people thrive with AI tools while others struggle with the same models, the same prompts, the same tasks. It's not about technical sophistication. It's about whether you can model another agent's knowledge state and adapt accordingly.
The irony is thick. The most important skill for working with artificial intelligence is one of the most fundamentally human skills we have.
You don't need to take a Theory of Mind course. But you might want to shift how you approach AI. First and foremost, this is about mindset:
- Slow down. The cognitive shift from "tool user" to "collaborator" might matter more than the specific words you use.
- Model the AI's knowledge state. What does it know? What's it missing? What might it get wrong?
- Treat confusion as signal. When the AI gives a bad response, ask what information it lacked—then provide it.
- Acknowledge and redirect. Simple social moves ("thanks, but that's not quite right") actually affect output quality.
The question isn't "how smart is the AI?" It's "how smart are we with the AI?"
We're spending billions on making AI smarter. Maybe the better investment is making ourselves better collaborators. The skill is learnable. The payoff compounds. And unlike model improvements, no one can take it away from you.
And maybe—if we all really lean into developing this skill—we can create a future where AI genuinely augments human capability rather than replacing it. Where the people who thrive aren't the most technical, but the most deeply human.
Wouldn't that just be something?
Links & Resources
The paper:
- Quantifying Human-AI Synergy — Riedl & Weidmann (2025)
- OpenReview submission — ICLR 2026
The researchers:
- Christoph Riedl — Northeastern University
- Ben Weidmann — University College London
Background:
- Theory of Mind — Wikipedia
- Item Response Theory — Wikipedia
- ChatBench — The benchmark dataset used (Chang et al., 2025)
Related research:
- Team Players: How Social Skills Improve Team Performance — Weidmann & Deming (2021)
- Quantifying Collective Intelligence — Riedl et al. (2021)