A program director at a Baltimore nonprofit is trying to finish a funding report before end of day. She pastes a set of client case notes into ChatGPT, gets a polished summary back in under a minute, and moves on. What she didn’t clock is that those notes included client names, diagnoses, and home addresses of the people her organization serves.
A project manager at a Maryland construction firm drops a subcontract agreement into an AI chatbot before a call, looking for a quick summary of the key terms. The contract has pricing structures, liability clauses, and the general contractor’s proprietary specs embedded in it. All of it goes with the paste.
At a manufacturing facility outside Fredericksburg, a floor supervisor asks an AI tool to help him write up a production issue. He copies in the incident log, which references equipment configurations, supplier part numbers, and shift scheduling data his company considers confidential.
None of these people made a reckless decision. They made a fast one. And across the Mid-Atlantic, decisions like these are happening dozens of times a day in organizations that haven’t yet drawn a clear line around AI use.
AI tools earn their place. Used well, they help lean teams move faster and work smarter. But there’s a boundary worth knowing, and once your team understands it, it’s straightforward to hold.
Why This Is Happening Right Now
AI adoption has moved faster than most organizations’ internal policies. A recent study found that over 75% of knowledge workers are already using AI tools at work, and in most cases, their employers have no formal guidance in place for how those tools should be used.
At the same time, compliance frameworks are catching up. Cyber insurance carriers are beginning to ask about AI use in renewal questionnaires. CMMC guidance is evolving to include AI-related data handling. And when something goes wrong, whether it’s a data exposure, a vendor complaint, or an audit finding, “we didn’t have a policy yet” is not a defense that holds.
The gap isn’t technological. Most of the tools your staff are using are not inherently dangerous. The gap is that speed has become the default, and without defined boundaries, fast decisions fill the space that policy hasn’t covered yet.
The Real Risk Is Not the Tool
AI doesn’t create new categories of risk. It accelerates existing ones. The same data that was sensitive in an email, a shared drive, or a printed report is just as sensitive when it’s pasted into a chatbot prompt. What changes is the speed and invisibility of how it travels.
Most consumer-grade AI tools, including the free versions your staff are likely already using, are built to learn from the conversations fed into them. Depending on the tool, what you enter may be stored, reviewed, or reused to train future versions. There’s no alert when that happens, and no easy way to retrieve the data afterward.
For nonprofits managing donor records or client services, that’s a HIPAA and confidentiality exposure. For construction and manufacturing firms, it’s a risk to trade secrets, bid data, and vendor relationships. For any organization working toward compliance with CMMC, NIST, or cyber insurance requirements, it’s the kind of gap that surfaces at exactly the wrong moment.
This is where cybersecurity guidance matters most: not because the tools are bad, but because every boundary you haven’t defined is a decision your staff makes by default, one paste at a time.
Data That Should Never Enter an AI Tool
Personal and Protected Information
Any data covered by privacy law is off-limits. Client names tied to addresses, health conditions, or financial circumstances. Social Security numbers. Donor records. Employee HR files. Any details about minors or vulnerable individuals your organization serves.
For nonprofits especially, the people you support often come to you because they have nowhere else to turn. Their trust is not something to trade for a faster turnaround.
Financial and Contract Data
Bid sheets, subcontract agreements, invoices, and vendor pricing are competitive intelligence your business depends on. A construction company’s margin structure or a manufacturer’s cost-per-unit took years to build and takes one paste to expose.
Entering it into a public AI tool is functionally the same as emailing it to an unknown recipient.
Credentials and Access Information
This one seems obvious until it happens. Passwords, API keys, internal network addresses, and VPN configurations should never appear in a prompt, even buried in a longer piece of text. Someone asking an AI to help troubleshoot a system issue sometimes includes far more technical context than they realize.
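One way to make that boundary concrete is a lightweight pre-flight check that scans text for credential-shaped strings before it ever reaches a prompt. A minimal sketch in Python, with the caveat that these patterns are illustrative only; a real deployment would use a maintained secrets-scanning ruleset tuned to your environment:

```python
import re

# Illustrative patterns only; real tooling should use a maintained
# secrets-scanning ruleset tuned to your environment.
CREDENTIAL_PATTERNS = {
    "password assignment": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    "API key or token":    re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    "private IP address":  re.compile(r"\b(10|172\.(1[6-9]|2\d|3[01])|192\.168)(\.\d{1,3}){2,3}\b"),
}

def flag_credentials(text: str) -> list[str]:
    """Return the names of any credential-like patterns found in the text."""
    return [name for name, pattern in CREDENTIAL_PATTERNS.items()
            if pattern.search(text)]

# Example: a troubleshooting note that carries more than the author realizes.
draft = "Can you help debug this? config: api_key=EXAMPLE-1234, host 192.168.1.40"
hits = flag_credentials(draft)
if hits:
    print("Hold off on pasting this. Found:", ", ".join(hits))
```

A check like this won’t catch everything, but it turns “be careful” into a concrete stop sign at the moment it matters.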
Client or Partner Communications
Emails, meeting summaries, and vendor correspondence often contain context that wasn’t meant to travel outside the organization. Negotiating positions, relationship dynamics, pending decisions. Even a summarized version carries risk if it lands in a tool with no data governance in place.
Legal and Compliance Documents
Contracts under review, litigation details, audit findings, or anything flagged by your legal team belongs in a controlled environment. If your organization is navigating CMMC compliance or HIPAA requirements, your IT partner should be helping you build AI use policies as part of that framework, not as an afterthought.
What Is Actually Safe
There’s a wide range of useful work AI handles well without touching anything sensitive. A simple framework helps: if the information is non-sensitive, non-identifiable, and already public, it’s generally fair ground.
Drafting public-facing content is a clear example. A nonprofit communications manager writing a donor newsletter, a construction firm creating a project update for a client, a manufacturing team building an FAQ for new hires: all of these work well when no private data enters the prompt.
Summarizing publicly available research, generating first drafts of job descriptions, brainstorming marketing angles, or formatting content that’s already been approved internally all sit comfortably in the safe zone. So do general policy language, industry news, and publicly shared documents.
A practical test worth sharing with your team: if this prompt were printed on a piece of paper and left on a table at a coffee shop, would finding it be a problem? If yes, it stays out of the tool.
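For teams comfortable with a bit of scripting, part of that test can even be automated as a tripwire. A rough sketch, assuming a few illustrative identifier patterns (SSN-style numbers, email addresses, phone numbers); it supplements the human judgment above rather than replacing it:

```python
import re

# Illustrative identifiers only; a real screen would cover the data
# categories named in your own policy (donor records, bid data, HR files).
PII_PATTERNS = {
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number":    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def coffee_shop_test(prompt: str) -> bool:
    """Return True if the prompt looks safe to send; False means it stays
    out of the tool pending a human look."""
    hits = [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]
    if hits:
        print("Would you leave this on a coffee shop table? Found:",
              ", ".join(hits))
        return False
    return True
```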
For organizations that want to go further, enterprise AI platforms with proper data governance agreements open up more use cases. When a tool is configured not to retain data, with auditing enabled and access controlled, the risk profile changes significantly. That conversation is worth having with your IT team before AI use scales across your staff.
The Human Factor
Most AI-related data exposure comes from people trying to do their jobs faster, not from carelessness or bad intent. The gap is almost always in the conversation that never happened: someone didn’t know where the line was, so they made a judgment call in the moment.
AI risk is not a technology problem. It’s a decision-making boundary problem. When staff don’t have a clear framework for what’s in or out, speed fills the gap, and speed without guardrails is where exposure happens.
Organizations that manage this well share a few habits. They put a written AI use policy in place before it’s needed. They give staff a plain-language list of data categories that require caution. And they build a culture where asking before acting feels easier than apologizing after.
For teams working in nonprofits, construction, or manufacturing, sensitive data moves through daily operations constantly. A clear policy gives your people something to lean on when the decision comes fast.
How to Build a Simple AI Policy Your Team Will Actually Use
A blanket ban on AI tools rarely holds. People find them useful, and they’ll keep using them with or without guidance. The more durable approach is a policy that’s short enough to remember and specific enough to act on.
Start by listing the data categories your organization handles regularly. Then ask honestly: which of those might an employee be tempted to run through an AI tool to save time? That list becomes the foundation.
From there, document which tools are approved for which tasks, where to escalate edge cases, and how often the policy gets reviewed. A one-page internal document covers most of what teams need. It doesn’t have to be complicated to do its job.
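If it helps to see what short and specific can look like, here is a hypothetical policy expressed as a small Python structure; every tool name, category, and contact below is a placeholder for your organization’s own choices:

```python
# A hypothetical one-page AI use policy expressed as data. Every value
# here is a placeholder; substitute your own tools, categories, and contacts.
AI_USE_POLICY = {
    "approved_tools": {
        "public-facing drafts":   ["approved enterprise AI platform"],
        "internal brainstorming": ["approved enterprise AI platform"],
    },
    "never_enter": [
        "client or donor personal information",
        "bid sheets, pricing, and contract terms",
        "credentials, keys, and network details",
        "legal and compliance documents under review",
    ],
    "escalation": "ask IT before pasting anything you're unsure about",
    "review_cadence_months": 6,
}
```

Keeping the policy in a structured form like this makes it easy to publish internally and, later, to wire into tooling that checks prompts against it.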
Frequently Asked Questions
Can AI tools store the data I enter into them?
Many consumer-grade tools can, depending on their terms of service. Free versions of popular AI chatbots often retain conversation data to improve their models. Enterprise plans typically offer data retention controls and privacy agreements, but those need to be configured and verified, not assumed.
Are paid AI tools automatically safer than free ones?
Paid plans often include better data governance options, but “paid” alone doesn’t guarantee privacy. The relevant questions are whether the tool offers a data processing agreement, whether retention can be disabled, and whether usage is logged and auditable. Your IT partner can help you assess specific tools against those criteria.
How do we train staff on AI use without making it feel like a restriction?
Frame it around empowerment rather than prohibition. Staff who understand the “why” behind data boundaries are far more likely to apply good judgment than those who just receive a list of rules. A short internal guide, a 20-minute team conversation, and a clear escalation path for edge cases cover most of what’s needed to start.
What if AI tools are already being used across our team without a policy?
That’s the most common starting point, and it’s fixable. Begin with a brief audit of which tools staff are currently using and for what purposes. From there, you can build a policy around what’s already happening rather than starting from scratch. The goal is clarity, not correction.
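On the IT side, one low-effort way to start that audit is to check outbound logs for known AI endpoints. A rough sketch, assuming your DNS filter or web proxy can export a plain-text log with one requested hostname per line; the domain list is illustrative, not exhaustive:

```python
# Illustrative, not exhaustive: a handful of well-known AI tool domains.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def find_ai_usage(log_path: str) -> dict[str, int]:
    """Count log lines that mention known AI domains. Assumes a plain-text
    export from a DNS filter or web proxy, one requested hostname per line."""
    counts = {domain: 0 for domain in AI_DOMAINS}
    with open(log_path) as log:
        for line in log:
            for domain in AI_DOMAINS:
                if domain in line:
                    counts[domain] += 1
    return {domain: n for domain, n in counts.items() if n > 0}
```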
Do You Know How AI Is Actually Being Used Across Your Organization?
Most leadership teams don’t. AI tool usage tends to spread informally, person to person, without IT visibility into what data is being entered or which tools are being used. That’s not a people problem. It’s a visibility gap.
The OmegaCor COR Assessment gives you that visibility. It reviews your cybersecurity posture, surfaces the operational gaps, including AI-related risk, and gives you a clear, prioritized picture of what to address next. You’ll leave with specific recommendations, whether you work with us going forward or not.
Schedule your COR Assessment. It takes less time than you’d expect, and it gives your leadership team something concrete to work with.
