
The Shadow AI Problem: Why Your Team Is Already Using AI (And What To Do About It)

Rachel Marcuse · 3 min read

Here's something we hear in almost every organization we work with: leadership is still debating whether to "adopt AI," while employees have been using ChatGPT for months.

They're drafting emails. Summarizing documents. Brainstorming ideas. Debugging code. And they're doing it quietly, without telling anyone, because they're not sure whether they're allowed to.

This is the shadow AI problem. And it's more common than you think.

Why people hide their AI use

The reasons are understandable. Employees worry about:

  • Looking lazy. If AI helped me write this, does it still count as my work?
  • Job security. If I show how much AI can do, am I making myself redundant?
  • Getting in trouble. There's no policy, so maybe it's not allowed?
  • Judgment from peers. Will my colleagues think I'm cheating?

So they use the tools privately, clear their browser history, and present the output as entirely their own. The work gets done, but the organization learns nothing.

The cost of shadow AI

When AI use stays hidden, you lose the opportunity to:

Learn together. One person discovers a great way to use AI for client research. Another figures out how to draft first versions of proposals. But if no one shares, everyone reinvents the wheel alone.

Set appropriate guardrails. Without visibility, you can't know if someone is pasting confidential data into a public AI tool, or using AI for tasks where it shouldn't be trusted.

Build institutional knowledge. The prompts, workflows, and techniques that work become invisible. When someone leaves, their AI expertise leaves with them.

Calibrate expectations. If half your team is using AI and half isn't, performance comparisons become meaningless. What does "good work" even mean anymore?

Bringing AI into the light

The goal isn't to crack down on AI use — it's to make it visible and shared. Here's what we've seen work:

Declare amnesty. Make it clear: we know people are using AI, that's okay, and we want to learn from what you've figured out. Remove the stigma before trying to gather information.

Create space for sharing. A Slack channel, a monthly show-and-tell, a shared doc of prompts and techniques. Make it easy for people to say "here's what I tried" without feeling exposed.

Start with guidelines, not rules. Heavy-handed policies drive AI use further underground. Offer principles instead: be thoughtful about confidential information, verify important facts, and don't present AI output as your own analysis when the distinction matters.

Model openness from leadership. When managers share their own AI experiments — including the failures — it signals that exploration is welcome and expected.

Acknowledge the weirdness. This is genuinely new territory. It's okay to say "we're figuring this out together" rather than pretending you have all the answers.

The opportunity in the awkward phase

Right now, most organizations are in an awkward in-between state. AI is too useful to ignore, but too new to have settled norms. That awkwardness is actually an opportunity.

The organizations that handle this moment well — that create cultures of open experimentation rather than hidden individual use — will learn faster, adapt more effectively, and build genuine capability rather than scattered, siloed tricks.

The shadow AI problem isn't really about AI. It's about trust, communication, and how your organization handles change. Get those right, and the technology part gets much easier.
