Your Employees Are Using ChatGPT With Your Company Data. You Probably Don't Know.
Sixty-eight percent of employees use AI tools at work that their employer never approved.[1] That's not a handful of rogue outliers. Across most industries, the majority of your staff is already doing this.
This week, Pittsburgh is hosting the NFL Draft and Carnegie Mellon is showcasing the future of AI at its Robotics Innovation Center at Hazelwood Green, positioning the city as a serious hub for the technology.[3] That's all genuinely exciting. But for the businesses that make up most of this region, the AI question right now isn't "when do we build a robot?" It's more immediate: what are your employees already doing with AI tools, and does anyone know?
What Shadow AI Actually Looks Like Day to Day
An account manager needs to write a client proposal fast. She opens ChatGPT, pastes in the client's background notes, last year's contract terms, and her pricing structure, and asks it to clean up the draft. She's not trying to cause a problem. She's trying to move fast.
Meanwhile, your HR person pastes a job description with internal salary ranges into an AI summarizer to tighten the language. Your bookkeeper uploads a spreadsheet directly into an AI tool to help with analysis. Your sales rep feeds a customer support transcript into ChatGPT to write a follow-up email.
None of this feels like a security incident while it's happening. That's what makes it hard. People aren't being reckless. They're being efficient, using the same tools they use at home, on company data they happen to have open in front of them.
The Numbers Are Hard to Ignore
Seventy-eight percent of employees who use AI at work are using tools they brought in themselves, not tools approved or configured by IT.[1] Thirty-eight percent have shared sensitive business information with an AI tool without their employer's permission.[1]
Here's the part that really sticks with me: 16.9% of sensitive data exposures (close to 100,000 documented incidents) happened on personal free-tier accounts that IT had zero visibility into.[1] Those aren't breaches in the traditional sense. No one broke in. The data left through the front door, one paste at a time, in accounts nobody was monitoring.
The Samsung case is the one security people reference the most. Three engineers leaked proprietary source code, internal meeting transcripts, and chip yield test data into ChatGPT within a single month before the company caught it.[2] They weren't trying to steal anything. They were doing their jobs, just faster. The damage was real regardless.
Most Small Businesses Have No AI Policy
When I talk to small business owners about this, the response is almost always the same: "We haven't thought about that yet." And that's fair. AI tools moved from curiosity to daily habit very fast. Writing policy hasn't kept up with adoption.
But having no guidance is itself a choice, and not a good one. You may have a solid handle on your email security, your backups, your antivirus. None of that touches what happens when an employee pastes your client list into a free consumer AI account. Your existing security controls and this new exposure simply don't overlap.
The exposure isn't theoretical. If your business handles customer data, financial records, medical information, or anything under a confidentiality agreement, that data leaving your controlled systems, even unintentionally, can mean real liability.
A Few Practical Steps That Don't Require a New Budget
Start by asking your team what AI tools they're currently using. Don't make it an interrogation. Make it a real conversation. You'll probably find out your staff is using five tools you didn't know about. That's useful information, not a discipline problem.
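If you want a factual baseline to bring into that conversation, most business firewalls and DNS filters can export their query logs. Here's a minimal sketch of what a check might look like, assuming a CSV export with a "domain" column; the file name, column name, and domain list are illustrative placeholders, not an exhaustive inventory:

```python
# Rough sketch: count lookups of well-known AI-tool domains in a
# DNS or firewall log export. Assumes a CSV with a "domain" column;
# adjust the column name and domain list to match your own logs.
import csv
from collections import Counter

# A starter list only; there are far more AI tools than this.
AI_DOMAINS = {
    "chatgpt.com", "openai.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def count_ai_hits(log_path: str) -> Counter:
    """Tally how often known AI-tool domains appear in the log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").lower().strip()
            for ai in AI_DOMAINS:
                # Count the domain itself and any subdomain of it.
                if domain == ai or domain.endswith("." + ai):
                    hits[ai] += 1
    return hits

if __name__ == "__main__":
    for domain, count in count_ai_hits("dns_log.csv").most_common():
        print(f"{domain}: {count} lookups")
```

Even a rough count like this turns "I think people might be using ChatGPT" into "we saw several hundred lookups to chatgpt.com last week," which is a far better starting point for the conversation.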
Then pick an approved option. If you're on Microsoft 365, Copilot is available with enterprise data protection built in, which keeps your prompts and files out of model training. That's a better path than letting everyone default to personal ChatGPT accounts: comparable capability, far more control over where your data goes.
Write a short AI usage guideline. One page. What's allowed, what isn't, which tools are approved, what categories of data to keep out of AI platforms. Put it in your onboarding docs and review it once a year. That's it. You don't need a legal team and a 30-page policy to do better than nothing.
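As a rough illustration to adapt, not a legal template, a one-pager might look like:

- Approved tools: Copilot through your company Microsoft 365 account. Personal AI accounts are off-limits for anything work-related.
- Never paste in: customer lists, financial records, salary information, health data, passwords, or anything covered by an NDA.
- Fine to use AI for: brainstorming, outlining, and polishing text that contains no confidential details.
- When in doubt: ask your manager or IT contact before trying a new tool, not after.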
Pittsburgh is putting itself on the map as an AI city, and I think that's worth celebrating. But for the businesses that actually run this region (healthcare offices, law firms, contractors, staffing companies), the AI conversation this week should include a look inward at what's already happening inside your own walls.
Not sure what AI tools your team is using, or want help putting together a straightforward usage policy? We can help. Send us a message or call (412) 307-8313. We work with Pittsburgh businesses on exactly this kind of practical, no-nonsense IT guidance.
[1] Second Talent, "Top 50 Shadow AI Statistics 2026: Real Data on Hidden AI Use," secondtalent.com
[2] Cybernews, "From Shadow IT to Shadow AI: Employees are sneaking ChatGPT and other tools into work," cybernews.com
[3] Carnegie Mellon University, "Carnegie Mellon University and AI Strike Team to Showcase the Future of Physical AI During NFL Draft Week," cmu.edu
[4] The Hacker News, "The Hidden Security Risks of Shadow AI in Enterprises," thehackernews.com