The Quiet Bias Undermining AI Adoption at Work
We’ve got the tech. But do we trust the people using it?
I read a study last week that’s been bothering me ever since. Over a thousand engineers were asked to evaluate a Python script. Some were told the code came from a fellow engineer. Others were told the same code came from an engineer using AI assistance. The code was identical. But the evaluations weren’t.
On average, evaluators rated the engineers who used AI as 9% less competent. Same work, lower score. And it got worse: women and older workers were penalized even more.
Let that sink in. We’re not evaluating the work—we’re judging how the work was done. And somehow, “with AI” signals lower skill, even if the outcome is the same.
What this means inside real companies
The unspoken message
I’ve been part of dozens of AI rollout conversations over the past year. Tools for writing emails, forecasting sales, generating code, managing documents—AI’s in everything now. Leadership keeps saying: “Use the tools. Be efficient. Save time.”
But here’s what employees are hearing: “Sure, use AI—but don’t let anyone know you did.”
Because the moment you say, “I used ChatGPT,” people wonder if you’re cheating. Or lazy. Or worse, not smart enough to do it yourself.
That’s not a technology issue. It’s a culture issue.
Good tools don’t fix bad culture
Most companies are pouring time and money into AI training, plugins, licenses, and dashboards. That’s all fine. But none of it works if the people using the tools are quietly punished for doing so.
This is where operations and leadership need to get honest: are we measuring outcomes, or are we still hung up on the process?
The performance paradox
Outcomes vs. effort theater
Here’s a story. One of our analysts built a slick new report that slashed weekly prep time. It pulled live data, automated the cleaning, and published to Looker in two clicks. Brilliant work. When she showed it to the team, someone asked, “Wait—you used Python and AI for this?”
You could feel the shift in the room. The conversation turned from “This is amazing” to “Well, how much did she actually do?”
Never mind that the old version took 6 hours and she’d reduced it to 30 minutes. She’d used the wrong kind of effort.
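And for a sense of what raised those eyebrows: a pipeline like hers can be a few dozen lines. Here’s a minimal sketch, assuming pandas, a CSV data feed, and a “date” column (the URL, column names, and cleaning steps are my illustration, not her actual code):

```python
import pandas as pd

# Hypothetical data feed -- her real source was live company data.
DATA_URL = "https://example.com/weekly_metrics.csv"

def build_weekly_report(url: str = DATA_URL) -> pd.DataFrame:
    """Pull, clean, and shape the weekly report data."""
    df = pd.read_csv(url)

    # Automated cleaning: the steps people used to grind through by hand.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    return df.dropna(subset=["date"])

if __name__ == "__main__":
    report = build_weekly_report()
    # "Publish to Looker" usually means landing the table the Looker
    # model reads from; writing a file stands in for that step here.
    report.to_csv("weekly_report.csv", index=False)
    print(f"Report ready: {len(report)} rows")
```

The point isn’t the code. It’s that thirty minutes of judgment about what to automate replaced six hours of manual grind, and the room treated that as a demerit.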
There’s still this worship of grind. As if working harder—typing more, clicking more, struggling more—is proof you earned the outcome. That mindset kills innovation.
What leaders should be doing instead
1. Make AI use visible—and valued
Don’t just allow AI—encourage it. Praise the outcome and the method. Share examples of work made better through AI, and name the people behind it. Otherwise, your employees will keep AI use quiet to protect their reputation.
2. Train evaluators, not just users
Yes, train people on how to use tools like Copilot or ChatGPT. But also train managers and peers to fairly assess AI-assisted work. If output is the standard, then output, not the method, is what gets judged.
And if collaboration with AI leads to better results—faster code, smarter writing, cleaner data—that should count as a win, not a shortcut.
3. Watch for double standards
When a young man uses AI, we call him efficient. Innovative. When an older colleague or a woman does the same, does it suddenly become “cheating” or “not real work”?
Bias doesn’t just happen in hiring—it happens in feedback, reviews, and casual comments. Keep an ear out. Interrupt it.
This isn’t about AI. It’s about trust.
Tools come and go. But trust is the infrastructure everything else is built on. If people don’t trust each other to use AI responsibly—or if they’re punished for using it at all—then it won’t matter how many licenses or workshops you roll out.
I’ve said it before, and I’ll keep saying it: culture eats strategy for breakfast. And in this case, culture is quietly telling people, “You’d better not make it look too easy.”
That’s a mindset we can’t afford.
Book Recommendation
The Fearless Organization by Amy Edmondson
It’s a practical, readable take on psychological safety and what really drives team performance. Especially relevant if your company’s pushing innovation but people are scared to actually try something new.
What do you think?
Have you seen this AI double standard at work—or maybe felt it yourself? I’d love to hear your take.