When your AI coach becomes a corporate spy

In the silence of her home office, Savika finally let her guard down. The AI chatbot window glowed on her screen—her digital coach, her late-night therapist. She typed out her frustration with her new manager, her insecurities about presenting at leadership meetings, her fear of not being “good enough”. The bot replied with affirmations and breathing exercises. She felt better. She closed the laptop, thinking it was done.

But was it?

Welcome to the age of digital vulnerability, where your most personal workplace confessions to AI-powered tools may not be as private as you think. As artificial intelligence seeps deeper into corporate life—from recruitment to performance reviews, from mental health support to employee engagement—it brings not just automation, but a quiet question mark over privacy and trust.

The surveillance paradox

The promise is seductive. AI coaching tools are pitched as enablers, platforms that help employees manage anxiety, build confidence, or reflect on personal goals. Companies tout them as democratic access to development resources previously reserved for senior executives. But the trust woven into these platforms proves fragile.

“The employee will recognise it, because he knows what he shared. Every ounce of trust the organisation has built crumbles.”

Manish Majumdar, head – HR, Centum Electronics

Manish Majumdar, head – HR, Centum Electronics, doesn’t mince words when asked if AI conversations can be repurposed. “Yes, they can. Technologically, it’s possible. But the real question is: should they be?”

The answer, according to Majumdar, lies in a chilling hypothetical. Imagine an employee admits to feeling nervous before leadership presentations. The AI tool provides coaching on managing stage fright. The employee improves and delivers results. But later, during a promotion discussion, a senior leader refers—subtly but unmistakably—to that nervousness.

“The employee will recognise it,” says Majumdar. “Because he knows what he shared.” The breach is no longer just technological. It’s human. It’s personal. “Every ounce of trust the organisation has built crumbles.”

When intelligence fails

The problem extends beyond privacy to competence. Deepti Mehta, CHRO, Interface Microsystems, offers a frontline view of AI’s limitations. Her team experimented with AI-based coaching tools developed with inputs from certified coaches. Yet when real-life complexities were fed into the system, the responses were often inadequate or irrelevant.

“One English sentence can be interpreted a hundred different ways,” she explains. “Empathy, nuance, emotional intelligence—AI just can’t replicate that.”

“One English sentence can be interpreted a hundred different ways. Empathy, nuance, emotional intelligence—AI just can’t replicate that.”

Deepti Mehta, CHRO, Interface Microsystems

In one instance, five senior leaders undergoing mentoring were left dissatisfied by AI’s inability to capture context or offer emotional insight. Eventually, they sought face-to-face coaching sessions, rejecting what was marketed as intelligent support. “AI can do the backend job,” Mehta notes. “But you can’t blindly trust it.”

That distrust spills into daily HR operations. Algorithms may be programmed by humans, but they’re no match for the textured emotions, unspoken tensions, or deeply personal experiences that define real workplace issues. Yet the feedback fed into these AI systems becomes data—often stripped of context, empathy, or protection.

The human factor

For Rishav Dev, head – talent acquisition, Century Plywoods, the concern goes beyond usage to intent. “Highly educated people can also be corrupt,” he says bluntly. “The ethical use of AI is questionable because it depends on who’s using it.”

“You never know what will be used against you in today’s world.”

Rishav Dev, head – talent acquisition, Century Plywoods

Dev recounts how an ex-colleague lost a substantial amount in a tech scam, reminding him that even the smartest among us are not immune to tech-enabled exploitation. “You never know what will be used against you in today’s world,” he cautions.

This vulnerability is what makes AI tools in the workplace so precarious. If even seasoned professionals can be outwitted by digital manipulation, how can the average employee feel safe sharing vulnerable truths with a chatbot? AI doesn’t commit the breach. Humans do.

The anonymity illusion

The problem isn’t confined to coaching tools. Majumdar shares a critical example from his company’s supposedly anonymous employee survey. He learned that the third-party agency could technically trace responses back to individuals. “But we consciously chose not to, because our goal wasn’t to know who said it. It was to understand what was being said,” he explains.

Not all organisations take the high road. Managers sometimes try to decode anonymous feedback, asking circuitous questions to guess who wrote what. HR leaders like Majumdar have seen this behaviour firsthand and actively work to shut it down. “You must focus on the content, not the source,” he asserts. Otherwise, feedback tools become a source of fear, not improvement.

The misuse extends beyond performance management. AI-generated data in compliance reports, behavioural analytics, or even court responses can transform internal tools into legal liabilities. Mehta’s team abandoned AI in compliance altogether after realising it created more confusion and duplicate work. “We’re not using it anymore,” she confirms. “We couldn’t rely on it to interpret real-time workplace challenges.”

The cost of betrayal

Every misuse of AI-gathered feedback is not just a betrayal but a business risk. If a breach comes to light, even indirectly, it can spiral into a crisis of faith. Employees talk. Screenshots are shared. WhatsApp groups light up. Social media magnifies the story. In an age where employer branding is critical for talent attraction and retention, even a whisper of surveillance can become a reputational hurricane.

It only takes one instance. One employee who feels targeted. One breach of confidentiality. Once trust is broken, it’s near impossible to rebuild. Employees start withholding information. They disengage from surveys. They resist coaching tools. Some even exit quietly, taking their talent—and their trauma—elsewhere.

Building ethical guardrails

AI coaching tools have their place. They can democratise development, increase access, and offer data-driven insights. However, their use must be guided by unwavering ethical standards. Consent must be explicit. Data must be anonymised beyond reversal. And above all, intent must be transparent.

If organisations want to lead with care, they must remember: it’s not just about the data they collect—it’s about what they choose to do with it. The true measure of leadership isn’t what you know about your employees. It’s how you treat what they choose to share.

In a world where algorithms are always listening, the most powerful technology may still be trust. And unlike artificial intelligence, trust cannot be coded—it must be earned and maintained.
