‘AI doesn’t understand anything’: Douglas Guilbeault on how AI inherits our prejudice

In what may be one of the most damning verdicts yet on the myth of “neutral” artificial intelligence, a landmark study published in Nature has revealed how mainstream algorithms — from Google Search to ChatGPT — systematically distort our perceptions of age and gender, making women younger, men more competent, and bias more subliminal than ever. If artificial intelligence is a mirror, what does it say about us that it sees women as younger and less experienced, and men as older and more competent?

We sat down with study author Douglas Guilbeault, Assistant Professor of Organisational Behaviour at the Stanford Graduate School of Business, to discuss the study and to dismantle the illusion that technology can be an impartial tool in the workplace.

How algorithms inherit our prejudices

“Even if the algorithms were neutral, the fact that they rely on cultural data means that they can’t actually be neutral, because cultural data is never neutral,” shared Guilbeault.

Titled ‘Age and gender distortion in online media and large language models,’ the study was based on an analysis of over 1.4 million images and videos, showing that women were consistently represented as younger than men across every major digital platform — from Google to Wikipedia, IMDb and YouTube. “We found evidence of a particular stereotype between gender and age,” he explained. “Women are represented as significantly younger than men across occupations, even though there are no systematic age differences in the workforce.”

The distortion, he added, “is starkest in depictions of high-status jobs — roles associated with power or prestige. In those cases, women are shown at their youngest, and men at their oldest.”

But the study went beyond documenting bias — it revealed how algorithms amplify it. Participants in experiments who searched Google Images for occupations became more likely to associate women with youthfulness and men with seniority. “Googling for images of occupations actually intensified people’s mistaken belief that women were younger than men in those roles,” Guilbeault said. “And when they judged who looked like a good fit for a job, they preferred younger women and older men. These were everyday jobs — doctors and managers — not particularly ‘youthful’ professions.”

More alarming still, the research found that these biases extended into AI hiring systems. When Guilbeault’s team used ChatGPT to generate and evaluate 40,000 resumes, they found a clear pattern of digital discrimination. “When it was generating resumes for women,” he said, “it assumed they were younger, that they graduated more recently, and that they had fewer years of experience. Then, when rating them, ChatGPT gave the highest scores to older men.”

“That means these systems don’t just reflect the biases of society — they operationalise them,” he added.

When asked what was driving this amplification, Guilbeault pointed to two distinct mechanisms. For Google, “a lot is driven by click‑through rates — what people choose to click on. The algorithm gives us what we ‘respond’ to, not what’s true. And that mirrors our biases because our choices aren’t always conscious or rational.”

ChatGPT’s distortion, he found, came from both its training data and its design. “These algorithms are especially prone to stereotyping because they’re built to generate the average person based on internet data — and that average person is profoundly biased. It’s like compressing an entire culture’s prejudices into a single model.”

“AI doesn’t understand anything”

Guilbeault’s most chilling observation concerned what he called the “illusion of understanding” at the heart of AI. “The way that humans, at our best, correct bias is by being intentional, reflective and empathetic,” he said. “None of that exists in AI right now. It’s just a statistical engine. These systems don’t understand anything — they’re constructing a world from data that’s already full of myths and misinformation.”

That lack of comprehension, he warned, has collided with Silicon Valley’s “move fast and break things” ethos. “They released products at a massive scale before checking how to handle the biases. Now they’re stuck pretending they can fix them. But the depth of the problem is enormous — it’s baked into the system.”

Though most of the data analysed came from the United States, Guilbeault replicated his results using IP addresses from Bangalore, Singapore, Amsterdam, Frankfurt and Toronto. The outcome was the same everywhere. “So much of the global internet is influenced by American data,” he explained. “Our digital mirrors are made here — and they’re exporting a particular vision of power and desirability.”

He also suspects that this pattern has deep psychological roots. “It ties back to the idea that men are supposed to be in power, and older men in particular. You see this patriarchal model reproduced again and again in history. It’s hardly surprising that our algorithms learned it too.”

The language of inequality is subtle

Perhaps the most insidious biases were not visual, but textual. Particularly alarming, according to the study, was the way AI systems encode infantilising language: the LLM’s penalty against older women manifested in terms like “overqualified” or “less agile.” “If you look at workplace communication, women are far more often referred to as ‘girls’ than men are as ‘boys,’” he noted. “People may not think this matters, but language shapes how we perceive authority. It’s hard to imagine the CEO of a company being called a ‘girl’ — it undermines her credibility without anyone noticing.”

This, he argued, is what makes AI bias so dangerous: “It’s not overt sexism, it’s embedded in the everyday language of digital systems, giving discrimination the veneer of objectivity.”

No easy fix

Though some advocates have called for regulation, Guilbeault was cautious. “I’ll leave the legal questions to the lawyers,” he said dryly. “But what’s needed is cultural and educational change. People have started offloading critical thinking to algorithms that literally don’t understand truth. These systems aren’t ready for that level of trust.”

The problem may defy easy solutions. As workplaces navigate this new human-machine contract and pivot from building “AI fluency” to building systems that do not mirror the worst aspects of the human condition, one thing is clear: the online literature and training data available to these LLMs will only be fixed if the underlying bias in human culture is fixed. The systems, however complex or deceitful they might be, learn from us. Which, Guilbeault noted, was the problem.
