You just got an email from your CRM vendor regarding a data breach. Three months ago, they auto-enabled a new AI feature that summarizes calls without requiring consent. It has been sending deal notes to a third-party model without a documented DPA. You have three days to notify your regulator. You don't have a data flow map that covers this, because nobody identified it as an AI tool. Nobody thought to.
This is the shape of what's coming. It's not some big ChatGPT drama. It's a minor feature that your procurement process was never designed to detect.
The issue isn't that you don't have an AI strategy. It's that your AI strategy covers the tools you planned to implement and ignores everything that arrived without your knowledge.
Who This Affects
Compliance managers, fractional CISOs, and SaaS founders running small governance teams are hit hardest by the AI Act's enforcement. Not because they are careless or unprepared, but because the regulatory timeline has caught up with them. The EU AI Act's obligations for high-risk AI systems, covering hiring, credit assessment, worker management, and education, become enforceable on August 2, 2026. In the same year, California SB 53 will require covered AI developers to publish their safety and risk frameworks.
Shadow AI also carries a measurable cost differential for the first time: organizations with high levels of shadow AI see breach costs averaging roughly $670,000 more than those with low levels. Ungoverned AI is a financial exposure, not only a compliance one.
Regulatory Radar
Starting August 2, 2026, the EU AI Act mandates human oversight, a documented conformity process, and data governance records if any AI in your stack, even unintentionally, is used in HR, lending, or education decision-making. Your vendor may carry provider obligations of its own, but the deployer obligations under the Act fall on you, not on them. Simply claiming that “our vendor handles that” will not protect you in an Article 17 audit, nor will it protect your company from liability.
In the US, the FTC is tightening AI enforcement around manipulative AI systems and has signaled that AI outputs that mislead consumers can be treated as an “unfair or deceptive act or practice.” No new law is needed; this happens under existing statutes. If a system's manipulated or misleading outputs deceive consumers, the company behind it is liable, even if unpredictable output is an inherent characteristic of the system.
The highest-ranked exploit in the OWASP 2026 GenAI Exploit Roundup is third-party AI infrastructure. Your exposure does not stop at the tools you can see; it runs through the models and infrastructure your vendors depend on, and through whatever those depend on in turn. If you cannot account for the AI supply chain two to five hops beyond your direct vendors, your plan is not defensible.
Why AI Is No Longer a Single Tool
Companies that treated AI governance as a one-off decision now have an unforeseen mess on their hands. A typical 80-person SaaS company runs AI in its CRM, its support platform, its coding tools, its productivity suite, and several other tools employees adopted without oversight. Those features rest on different frameworks and different models, running on different infrastructure, often under different governing terms, and in many cases touching and exposing the same customer data.
The number that may cause some discomfort: Gartner predicts that 40% of enterprise applications will feature built-in AI agents by the end of 2026. That is not 40% of AI tools; it is 40% of the ordinary enterprise applications companies use across every department. If your vendor review process only catches tools marketed as “AI products,” it is not even seeing the tip of the iceberg.
The Visible Stack vs. The Invisible Stack
Most organizations can name their visible AI stack in ten minutes. It's the list in the IT register: the tools you deliberately evaluated, contracted, and (in better-run shops) put through a vendor risk review.
The invisible stack is what's running that nobody formally approved:
| Visible Stack | Invisible Stack |
| --- | --- |
| Deliberately purchased AI tools | AI features auto-enabled inside existing SaaS |
| Approved model API integrations | Consumer GenAI on personal employee accounts |
| IT-managed tools with known data flows | Vendors using AI in their own internal operations |
| Documented model versions and updates | Third-party models your vendors' products call |
| Signed DPAs covering AI data use | ToS changes that modified AI data use after you signed |
The invisible stack didn't get past your controls. Your controls weren't designed to see it.
The Five Layers of the Invisible Stack
Layer 1 — AI Features Inside Tools You Already Approved
This is the most rapidly expanding layer. Your CRM now auto-summarizes sales calls. Your HR platform auto-ranks applicants. Your email client drafts replies. You approved these tools before those features existed, so the model behind them, the data retention policies, and the sub-processor arrangements are almost certainly different from what you reviewed at procurement.
Layer 2 — Personal Accounts Your Employees Are Using for Work
Someone on your team is pasting customer data into a free ChatGPT account to summarize a meeting. Someone else is running contract drafts through Claude on a personal account. This is not a training failure; people reach for these tools because they save time. It is a governance failure because the data leaves your environment and is processed outside your control: no processing agreement, no way to honor a data subject request, no audit trail.
Layer 3 — Your Vendors' Internal Use of AI
Your payroll processor may use AI to flag payroll anomalies. Your legal research vendor may use AI to auto-generate memoranda. Either way, they are processing your data with AI, and that makes them part of your AI supply chain. Most vendor risk assessments ignore this, and most vendors do not disclose it unless asked directly.
Layer 4 — Agents and Automations Nobody Is Watching
A Zapier workflow uses an AI step to score inbound leads, a Make scenario drafts customer responses, and a custom GPT someone in Marketing built over a weekend runs every Monday morning. These systems act. They decide. In many cases the creator has since left, and nobody knows how to deactivate them, let alone audit the processes they trigger.
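One lightweight defense for this layer is a plain automation register: every agent gets a named owner, a one-line purpose, and a documented kill switch. A minimal sketch in Python; the field names and the sample entry are illustrative, not a standard:

```python
from dataclasses import dataclass, asdict
import csv

# Minimal record for one agent or automation. Field names are
# suggestions -- adapt them to whatever your team actually tracks.
@dataclass
class AutomationRecord:
    name: str            # e.g. "Inbound lead scorer"
    platform: str        # Zapier, Make, custom GPT, internal script
    owner: str           # a named human, not a team alias
    purpose: str         # one sentence: what it decides or does
    data_touched: str    # which customer/employee data it reads or writes
    kill_switch: str     # concrete steps to disable it today

registry = [
    AutomationRecord(
        name="Inbound lead scorer",
        platform="Zapier",
        owner="jane.doe@example.com",
        purpose="Scores inbound leads with an AI step before CRM routing",
        data_touched="Prospect names, emails, form answers",
        kill_switch="Zapier > Zaps > 'Inbound lead scorer' > toggle off",
    ),
]

# Persist as CSV so the register survives tool churn and can live
# next to the rest of your governance docs.
with open("automation_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(registry[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(r) for r in registry)
```

Even a register this crude answers the question regulators actually ask: who owns this, and how do you stop it?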
Layer 5 — Model Versions That Change Without Notice
Even within your intentional stack, the tools you evaluated, approved, and documented, do you know which model version is running today? Most SaaS providers swap models without classifying it as a significant change. Behavior shifts. Any accuracy or bias baseline you established in your original review may no longer hold. You will discover this when something produces a result nobody can explain.
Why Lineage Gaps Are a Legal Problem, Not Just an Audit Finding
Data lineage for AI means answering four questions: which data informed this decision, which model was involved, which version, and what was the result. Without those answers, AI-assisted decisions cannot be meaningfully explained, subject access requests cannot be completed, and post-incident reviews stall. Most importantly, the decisions cannot be explained to the satisfaction of a regulator.
Article 10 of the EU AI Act targets providers of high-risk AI and requires them to document where training data comes from. If you deploy high-risk AI in HR, credit, or access management, documenting provenance falls on you in practice. If your vendor cannot produce a change log and neither can you, you have a lineage gap, and that gap holds its risk until a candidate, a customer, or a regulator asks a question you cannot answer.
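Where you control the AI calls directly, part of the gap closes cheaply: write one lineage record per AI-assisted decision, at the moment it happens. A minimal sketch, assuming the model name and version are visible to you at call time; every field name here is illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_lineage(decision_id: str, model: str, model_version: str,
                input_data: str, output: str,
                path: str = "lineage_log.jsonl") -> None:
    """Append one lineage record per AI-assisted decision.

    Captures the four questions lineage has to answer: which data,
    which model, which version, and what came out. The raw input is
    hashed rather than stored, since it may contain personal data.
    """
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a hypothetical resume-screening call.
log_lineage(
    decision_id="screen-2026-0142",
    model="vendor-screening-model",   # placeholder, not a real product
    model_version="2026-05-01",       # whatever the vendor reports
    input_data="<applicant resume text>",
    output="advance to interview",
)
```

An append-only log like this will not cover your vendors' internal AI, but it turns "we can't reconstruct that decision" into a five-minute grep for the systems you do control.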
Why "We Approved the Tool" Is Not Enough
Procurement approval is a snapshot. It evaluates a tool based on the capabilities it had at the time of review. It says nothing about the AI features the vendor adds in a later update, the new sub-processor that appears in the privacy policy, or the ToS revision that quietly changes how customer data is used.
Approval-based governance is always looking backward. You also need a lightweight, sustainable, continuous layer: one that tracks new AI features and updates in the tools you already run, watches vendor ToS changes, and gives employees an easy way to declare the tools they actually use. The question to keep asking at every stage of AI in your organization is not, "Did we approve it?" The real question is, "Do we know what it is doing right now?"
The 10-Question AI Stack Diagnostic
Work through these with whoever owns IT security and vendor management. A vague answer counts as a no.
Do you have a complete list of the tools in your stack that have enabled an AI feature in the past six months, including tools not marketed as AI products?
Do you know which specific AI model (GPT-4o? Claude 3.7? Gemini 1.5?) powers the AI features your team uses most often?
For your top ten SaaS vendors, do you know how each one uses AI to process your data, both in the features they expose to you and in their own internal operations?
Have you reviewed your key vendors' Terms of Service changes in the past 90 days, specifically for AI data-use clauses? (A minimal monitoring sketch follows this list.)
Do you have a technical or policy mechanism for finding out if and when employees use personal AI accounts for work?
For each AI agent or automated workflow active in your organization, is there a person to contact, a short description of what it is, and a way to stop it?
Do you know the current model version behind each AI feature your operations or products rely on?
If a data subject access request arrived today, could you list every AI system that has touched that person's data?
Does your incident response plan cover a breach that originates in an AI system you did not build?
When a vendor adds a new AI feature after your last review, does that change automatically trigger a fresh review?
More than three no's means your invisible stack is already ahead of your governance.
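On question four, even primitive automation beats nothing. A minimal sketch that fingerprints vendor terms pages and flags any change for a human to read; the vendor names and URLs are placeholders:

```python
import hashlib
import urllib.request

# Pages to watch -- replace with your vendors' actual ToS / AI-terms URLs.
TOS_PAGES = {
    "crm_vendor": "https://example.com/legal/ai-terms",
    "support_vendor": "https://example.com/terms",
}

def page_fingerprint(url: str) -> str:
    """Fetch a page and return a hash of its contents."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_changes(previous: dict[str, str]) -> dict[str, str]:
    """Compare current fingerprints to stored ones and report vendors
    whose terms changed since the last run. Persist the returned dict
    between runs; a JSON file or a spreadsheet column is enough."""
    current = {name: page_fingerprint(url) for name, url in TOS_PAGES.items()}
    for name, digest in current.items():
        if previous.get(name) and previous[name] != digest:
            print(f"ToS changed for {name} -- review AI data-use clauses")
    return current
```

This flags any edit, not just AI clauses; the point is that a human reads the diff within days instead of discovering the change during an incident.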
Free Takeaway: Your AI Stack Is Bigger Than Your AI Tool List
Most governance programs struggle to see AI's footprint because they lack an AI Stack Visibility Map. This is not just an approved-tools list; it is a record of every way AI intersects with your data, your decisions, and your customers: official and unofficial tools, shadow agents and automations, and vendors that process your data with AI models you never evaluated.
Don't aim for an all-encompassing version on the first pass. Even a rough five-column spreadsheet will surface gaps before a regulator, an auditor, or an incident surfaces them for you. The goal is governance that reflects the reality of your AI environment, not your best guess.
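The five columns are yours to choose. One plausible starting schema, sketched in Python so the map can be generated and kept under version control; every column name and sample row below is a suggestion, not a standard:

```python
import csv

# One plausible five-column schema for a first-pass visibility map.
COLUMNS = [
    "tool_or_feature",    # e.g. "CRM call summaries"
    "layer",              # 1-5, matching the five layers above
    "ai_model_if_known",  # model name/version, or "unknown" -- a gap itself
    "data_touched",       # customer, employee, financial, ...
    "owner_and_review",   # named owner plus date of last review
]

rows = [
    ["CRM call summaries", "1", "unknown",
     "customer call audio, deal notes", "unassigned / never reviewed"],
    ["Personal ChatGPT use (support team)", "2", "unknown",
     "customer tickets", "unassigned / never reviewed"],
]

with open("ai_visibility_map.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
```

Every "unknown" and "unassigned" in that file is a finding in itself, which is exactly why the incomplete first pass is still worth doing.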
Self-Check
Five yes/no questions. Your first answer is usually the right one.
Could you hand a regulator a complete list of AI touchpoints in your stack within 24 hours if they asked today?
Have you reviewed your key vendors' AI-related terms in the last quarter?
Does every agent and automated workflow in your environment have a named owner right now?
Is there a process for catching new AI features before your employees or vendors activate them without review?
Does your current incident response plan explicitly cover an event that originates from a vendor's AI system?
→ Run the AI Enterprise Readiness Checklist this week. It is built for teams without a dedicated AI governance function, and it maps all five layers of your AI stack to specific, practical governance controls. With the August 2 EU AI Act high-risk deadline under three months away, it is the fastest way to find the gaps in your governance program before another, potentially less friendly, party does. Download it for free: https://www.the-ai-governance-brief.com/products/ai-enterprise-readiness-checklist