Shadow AI: What It Is and Why Your Security Team Can’t See It

By Zakery Stufflebeam, Founder, Spartan Cyber Consulting | GCIH, GSOM, SANS Speaker, SANS Instructor

I want to start with something that happened during a recent SAGA assessment.

The client was a mid-market organization in a regulated industry. They had a mature security program by most measures: EDR deployed across endpoints, a functioning SIEM, documented policies, and a security team that knew what they were doing. When I asked leadership how many AI tools were running in their environment, the answer was confident. “We have an approved ChatGPT Enterprise license and that’s it.”

By the end of Week 1 discovery, we had identified 23 distinct AI tools operating across their environment. Eleven of those were browser-based tools running under personal employee accounts. Three were AI capabilities that had been activated automatically inside platforms they already licensed, tools that employees were using without any awareness that an AI was processing their inputs. Two were agentic tools with write access to internal systems that nobody in IT knew existed.

None of the 23 had been through a security review. None were in the asset inventory. The employees using them weren’t malicious. They were trying to get their jobs done faster.

This is shadow AI. And the reason your security team can’t see it has nothing to do with how good they are at their jobs.

What Shadow AI Actually Is

Shadow AI is any AI tool, model, or integration operating in your organization without IT knowledge, approval, or governance oversight. It is the AI evolution of the shadow IT problem that security teams have managed for a decade, but with one critical difference.

When an employee puts a spreadsheet into an unauthorized Dropbox account, the data goes in and sits there. When an employee pastes a contract into an AI tool on a personal account, the data goes somewhere much harder to track: into a model that may retain it, learn from it, or surface it in responses to other users. The data does not sit in a bucket. It becomes part of something larger and far less controllable (OffSec, 2026).

Shadow AI is not a fringe behavior. It is the norm. Research published by UpGuard in November 2025 found that more than 80% of workers use unapproved AI tools, including nearly 90% of security professionals themselves (Cybersecurity Dive, 2025). A BlackFog survey of 2,000 employees released in January 2026 found that 49% reported using AI tools not sanctioned by their employer (BlackFog, 2026). The National Cybersecurity Alliance found that 43% of AI users admitted to sharing sensitive company information with AI tools without their employer’s knowledge (Foley and Lardner, 2026).

Your employees are not the problem. The absence of governance is.

Why Your Security Team Cannot See It

This is the part that surprises most security leaders when I walk them through a SAGA engagement. The visibility gap is not a staffing problem or a tools problem in the traditional sense. It is a structural problem. Your security stack was not built to see this.

Here is where the gaps are.

Most AI tools are browser-based and require no installation. Your endpoint detection and response platform is looking for processes, executables, and installed applications. A browser tab pointed at an AI tool running under a personal Google account produces no alert, no process flag, and no entry in your asset inventory. It is invisible to conventional endpoint security (TechPR, 2026).

Personal accounts bypass your identity controls entirely. When an employee uses a personal ChatGPT account or a free-tier AI transcription service, they are authenticating to that service with their personal credentials. Your identity provider never sees it. Your CASB, if you have one, may catch the domain-level traffic but cannot see what data is being sent. The Netskope 2025 Cloud and Threat Report found that 47% of generative AI platform users access these tools through personal, unmonitored accounts (OffSec, 2026).
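
To make that gap concrete, here is a minimal sketch of the kind of domain-level check a CASB or proxy-log review performs. The watchlist, log format, and field names are illustrative assumptions rather than any product's actual schema; the point is that a domain match tells you a destination was reached, not what was pasted into it.

```python
# Minimal sketch: domain-level detection of AI traffic in proxy logs.
# The watchlist and log format are illustrative assumptions, not any
# vendor's schema. TLS means the prompt payload is never visible here.

# Hypothetical watchlist of AI service domains.
AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "api.openai.com",
}

def flag_ai_traffic(proxy_log_lines):
    """Yield (user, domain) pairs for requests to watchlisted AI domains.

    Assumes a space-delimited log: timestamp, user, domain, bytes.
    Note what this cannot tell you: whether the request carried a
    customer contract, source code, or a harmless question.
    """
    for line in proxy_log_lines:
        fields = line.split()
        if len(fields) < 4:
            continue
        _, user, domain, _ = fields[:4]
        if domain in AI_DOMAINS:
            yield user, domain

# Fabricated log entry for illustration.
sample = ["2026-03-01T09:14:02 jdoe chat.openai.com 48211"]
for user, domain in flag_ai_traffic(sample):
    print(f"{user} reached {domain} -- payload content unknown")
```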

Embedded AI is activated without a deployment event. This is the category that consistently produces the most surprises in SAGA assessments. Vendors activate AI capabilities inside platforms you already license (Zoom, Salesforce, Microsoft 365, Adobe, Grammarly), often through product updates that require no separate installation and no new OAuth grant. Your security team did not approve a new tool because, technically, no new tool was deployed. The AI feature was simply turned on inside something you already trusted. Research from Acuvity found that 18% of organizations specifically worry about AI features embedded within approved SaaS applications being enabled automatically (Acuvity, 2026). That number understates the actual exposure, because most organizations have never audited which AI capabilities exist within their licensed platforms.

Your DLP and IAM tools were built for a different threat model. Standard DLP solutions look for data patterns (Social Security numbers, credit card numbers, specific file types) moving to known risky destinations. They were not built to analyze the semantic content of AI prompts or to detect that an employee just pasted three paragraphs of a confidential vendor agreement into a public AI tool. Standard IAM solutions manage identities within your identity provider. They cannot govern AI tools operating under personal credentials or track the service accounts that AI tools create when integrated into your internal systems. According to research cited in CIO, standard DLP and IAM solutions are often completely blind to agentic AI tools operating with ephemeral identities (CIO, 2026).
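
To illustrate the mismatch, here is a minimal sketch of a traditional pattern-based DLP check, with simplified regexes that are assumptions for illustration, not production rules. It flags a Social Security number instantly and lets a confidential contract paragraph sail through untouched.

```python
# Minimal sketch of why pattern-based DLP misses AI prompt content.
# The regexes are simplified illustrations, not production DLP rules.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def classic_dlp_flags(text: str) -> list[str]:
    """Return the pattern categories a traditional DLP rule would flag."""
    hits = []
    if SSN_RE.search(text):
        hits.append("ssn")
    if CARD_RE.search(text):
        hits.append("card_number")
    return hits

# A pasted confidential contract paragraph: no SSNs, no card numbers,
# so pattern-based inspection sees nothing to block.
contract_excerpt = (
    "Vendor shall deliver the licensed software to Acme Corp under the "
    "exclusive terms negotiated on 12 January, including the pricing "
    "schedule in Exhibit B, which both parties agree to keep confidential."
)

print(classic_dlp_flags("My SSN is 123-45-6789"))  # ['ssn'] -- flagged
print(classic_dlp_flags(contract_excerpt))         # [] -- sails through
```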

Only 12% of companies can detect all shadow AI usage, according to data compiled by Second Talent in 2026. Only 30% of organizations have full visibility into employee AI usage. Gartner has estimated that by 2030, more than 40% of enterprises will face security or compliance incidents stemming directly from unauthorized AI use (OffSec, 2026).

What Is Actually at Risk

The data exposure risk is direct and immediate. Employees are sharing customer data, financial forecasts, legal contracts, source code, and protected health information with AI tools that have no data retention agreements, no SOC 2 reports, no data processing agreement, and no contractual obligation to protect that information. In one widely reported incident, engineers at a major semiconductor company pasted proprietary source code into ChatGPT. That code later resurfaced in responses to other users. The breach could not be reversed (Cloud Security Alliance, 2025).

Shadow AI-related breaches cost organizations an average of $670,000 more per incident than standard data breaches, according to IBM’s 2025 Cost of a Data Breach analysis (Keepnet, 2026).

The regulatory exposure is compounding. Insurers are beginning to ask about AI governance in underwriting questionnaires. NAIC model law development is underway. The EU AI Act is already in force for organizations with European operations. HIPAA enforcement has not yet caught up to AI-specific scenarios, but the obligations around protected health information do not have a shadow AI carve-out. If a nurse practitioner is using an unauthorized AI transcription tool to summarize patient visits, and that tool stores audio on third-party infrastructure, your organization has a HIPAA problem regardless of whether IT knew about the tool.

Agentic AI introduces a third category of risk that most organizations have not yet started thinking about. Agentic tools do not just process data. They take autonomous actions: sending emails, calling APIs, modifying records, executing code, submitting forms. When an employee deploys an agentic AI tool inside your environment without a security review, that tool may be operating with the same permissions as the user who deployed it, with no human approval required for each action it takes. Security teams lack the visibility to know which of these tools exist, what they are authorized to do, or what data they are accessing (Noma Security, 2026).
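
One place this risk becomes visible is in OAuth grants. The sketch below, using hypothetical grant records and scope names (real scope strings vary by identity provider), shows the kind of triage that separates read-only integrations from tools that can act.

```python
# Minimal sketch: triaging OAuth grants for agent-like write access.
# Grant records and scope names are illustrative assumptions; real
# scope strings vary by identity provider.

# Scopes that let a token ACT, not just read.
WRITE_SCOPES = {"mail.send", "files.readwrite.all", "calendars.readwrite"}

def high_risk_grants(grants):
    """Return grants whose scopes allow autonomous write actions."""
    risky = []
    for grant in grants:
        scopes = {s.lower() for s in grant["scopes"]}
        if scopes & WRITE_SCOPES:
            risky.append(grant)
    return risky

# Fabricated example grants for illustration.
grants = [
    {"app": "ai-meeting-notetaker", "user": "jdoe",
     "scopes": ["Calendars.ReadWrite", "Mail.Send"]},
    {"app": "pdf-viewer", "user": "asmith", "scopes": ["Files.Read"]},
]

for g in high_risk_grants(grants):
    print(f"{g['app']} (granted by {g['user']}) can take actions: {g['scopes']}")
```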

Why Banning Does Not Solve It

I hear this regularly from security leaders: “We have a policy prohibiting unapproved AI tools.” That policy is not functioning as a control. It is functioning as a liability hedge.

The research is unambiguous on this point. Employees use AI tools because the tools work and the productivity gains are real. Blocking public AI domains pushes usage toward more obscure tools that are harder to monitor. OffSec's analysis found that banning AI does not eliminate usage; it pushes it underground and strips away whatever limited visibility security teams had (OffSec, 2026). BlackFog found that 63% of employees believe it is acceptable to use AI tools without IT oversight if no company-approved option is provided (BlackFog, 2026).

A policy without a governance structure behind it is not a control. It is a document.

What Actually Works

The organizations that successfully manage shadow AI risk do three things that most organizations are not doing yet.

First, they build an inventory from evidence, not self-reporting. Asking employees what AI tools they use produces an incomplete list of tools employees are willing to admit they use. Building an inventory from SIEM logs, EDR telemetry, identity provider OAuth grants, DNS query data, and CASB logs produces a significantly more complete picture. The gap between what employees report and what technical discovery finds is, in my direct experience, always larger than the organization expects.
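
As one concrete illustration of evidence-based discovery, here is a minimal sketch that cross-references DNS query logs against a watchlist of AI service domains and maps each hit back to its source. The log format and watchlist are assumptions for illustration; in a real engagement the same logic runs against SIEM exports, resolver logs, and EDR network telemetry.

```python
# Minimal sketch: building a shadow AI inventory from DNS query logs.
# Record format and watchlist are illustrative assumptions; in practice
# this data comes from your SIEM, resolver logs, or EDR network telemetry.
from collections import defaultdict

AI_WATCHLIST = {
    "chat.openai.com", "claude.ai", "gemini.google.com",
    "api.anthropic.com", "otter.ai",
}

def build_inventory(dns_records):
    """Map each AI domain seen to the set of source users/hosts.

    dns_records: iterable of (source, queried_domain) tuples.
    """
    inventory = defaultdict(set)
    for source, domain in dns_records:
        # Match the domain itself or any parent domain on the watchlist.
        parts = domain.lower().split(".")
        candidates = {".".join(parts[i:]) for i in range(len(parts))}
        for hit in candidates & AI_WATCHLIST:
            inventory[hit].add(source)
    return inventory

# Fabricated records for illustration.
records = [
    ("ws-114", "chat.openai.com"),
    ("ws-207", "api.anthropic.com"),
    ("ws-114", "otter.ai"),
]
for domain, sources in build_inventory(records).items():
    print(f"{domain}: used from {sorted(sources)}")
```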

Second, they audit vendor platforms for embedded AI capabilities. Every major SaaS platform your organization uses should be reviewed for AI capabilities that have been activated or that can be activated by end users. This is not a one-time exercise. Vendors add AI features in product updates continuously, often without prominent communication to IT or security teams.
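
A lightweight way to make that review recurring is a tracking record per licensed platform: which AI features exist, whether they are enabled, and whether IT can centrally control them. The platforms, features, and flag values below are hypothetical placeholders, not findings about any vendor.

```python
# Minimal sketch: a recurring audit record for embedded AI in licensed SaaS.
# Platform names, features, and flag values are hypothetical placeholders.
from dataclasses import dataclass
from datetime import date

@dataclass
class EmbeddedAIFinding:
    platform: str            # hypothetical platform name
    ai_feature: str          # e.g., meeting summaries, generative drafting
    enabled: bool            # is the feature active in your tenant today?
    admin_controllable: bool # can IT centrally disable or scope it?
    last_reviewed: date

audit = [
    EmbeddedAIFinding("VideoConfSuite", "AI meeting summaries",
                      enabled=True, admin_controllable=True,
                      last_reviewed=date(2026, 3, 1)),
    EmbeddedAIFinding("WritingAssistantX", "generative rewrite suggestions",
                      enabled=True, admin_controllable=False,
                      last_reviewed=date(2026, 3, 1)),
]

# Surface the items needing action: active AI with no central off-switch.
for finding in audit:
    if finding.enabled and not finding.admin_controllable:
        print(f"REVIEW: {finding.platform} / {finding.ai_feature} is active "
              f"and cannot be centrally disabled")
```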

Third, they build governance ahead of the inventory, not after it. Organizations that try to govern AI tools reactively, by responding to incidents or to what employees report, are always behind. The organizations that get ahead of this problem establish an AI acceptable use policy, a classification framework defining what data can and cannot be used with AI tools, and an access review process that includes AI tools before the next wave of shadow adoption arrives.
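
A classification framework can start as simply as an explicit mapping from data classes to the AI tool tiers permitted to process them, consulted in the review process before any tool is approved. The classes and tiers below are illustrative assumptions, not a standard.

```python
# Minimal sketch: mapping data classifications to permitted AI tool tiers.
# Class names and tiers are illustrative assumptions, not a standard.

POLICY = {
    "public":       {"enterprise_ai", "approved_saas_ai", "personal_ai"},
    "internal":     {"enterprise_ai", "approved_saas_ai"},
    "confidential": {"enterprise_ai"},
    "regulated_phi": set(),  # no AI processing absent a BAA and legal review
}

def is_permitted(data_class: str, tool_tier: str) -> bool:
    """Check a proposed AI use against the classification policy.

    Unknown classes fail closed: if the data cannot be classified,
    it does not go to an AI tool.
    """
    return tool_tier in POLICY.get(data_class, set())

print(is_permitted("internal", "enterprise_ai"))       # True
print(is_permitted("confidential", "personal_ai"))     # False
print(is_permitted("unknown_class", "enterprise_ai"))  # False, fails closed
```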

In regulated industries (insurance, financial services, healthcare, and professional services), the governance gap is also a regulatory gap. Framing AI governance as a compliance obligation rather than just a security preference tends to move it up the priority list.

The Question Worth Asking Today

If your board asked you right now to list every AI tool operating in your environment, what access those tools have to your data, and what governance exists to manage the risk, could you answer?

Most organizations cannot. That is not a criticism. The problem moved faster than any governance program in history, and the tools that security teams rely on were not built to solve it.

The first step is visibility. You cannot govern what you cannot see, and you cannot see shadow AI with the tools that were built before it existed.

If you want to understand what is actually running in your environment, that is exactly what the SAGA methodology was built to uncover.

Schedule a scoping call with Spartan Cyber Consulting


References

Acuvity. (2026). 2025 State of AI Security. acuvity.ai

BlackFog. (2026, January 27). Shadow AI threat grows inside enterprises. blackfog.com

Cloud Security Alliance. (2025, March 4). AI gone wild: Why shadow AI is your worst nightmare. cloudsecurityalliance.org

CIO. (2026, April). Shadow AI morphs into shadow operations. cio.com

Cybersecurity Dive. (2025, November 12). Shadow AI is widespread and executives use it the most. cybersecuritydive.com

Foley and Lardner. (2026, April). For your eyes only? Not quite: Shadow AI in the workplace. foley.com

IBM. (2025). Cost of a Data Breach Report 2025. ibm.com

Keepnet. (2026, March 14). What is shadow AI? keepnetlabs.com

Noma Security. (2026, February 16). Shadow AI agents: Why untracked AI is the new shadow IT. noma.security

OffSec. (2026). Shadow AI: How unsanctioned tools create invisible risk. offsec.com

Second Talent. (2026, March 3). Top 50 shadow AI statistics 2026. secondtalent.com

TechPR. (2026). What is shadow AI? A complete guide for enterprise security teams. techpr.online
