AI Sprawl: What It Actually Is and How It Starts Inside Companies

AI sprawl is the uncontrolled spread of AI tools, models, agents, and embedded features across a company without central visibility, shared rules, or a single owner. It usually starts quietly: one paid ChatGPT seat here, a vendor flipping on an AI feature there, a marketer vibe-coding a small app over the weekend. Six months later, finance is paying for eleven overlapping subscriptions, IT can't list them, and nobody can tell if any of it is actually paying back.
Most writing on this topic frames AI sprawl as a security problem. It is, but that's not where it starts. It starts as an adoption problem with no architecture behind it. This piece walks through the five places AI sprawl actually comes from, how to inventory it without slowing anyone down, and the decision checklist we use at Tech10 when a company calls us in because the bill has gotten strange.
What AI sprawl actually means in 2026
AI sprawl is the unmanaged growth of AI inside a company. Uncontrolled models, tools, agents, and AI-enabled features accumulate across teams, vendors, and cloud accounts without a shared inventory or shared rules. The word "uncontrolled" is doing real work here. It doesn't mean "unauthorized." Most of the tools in a sprawling stack were approved somewhere. They just weren't approved together.
Zapier's October 2025 survey of 550 enterprise C-suite leaders at companies with 1,000+ employees found that 28% already use more than ten different AI apps, 70% haven't moved beyond basic integration, and 76% have had at least one negative outcome because of disconnected AI. Another 31% say they discover new "rogue" AI tools inside their organization every single month. That's the shape of the problem: not one bad decision, but hundreds of small ones nobody connected.
Here's a quick vocabulary check so the rest of the article lands:
- Shadow AI is one subset of sprawl. These are AI tools employees adopt without IT knowing. It's the loudest part, not the biggest.
- Embedded AI is the quietest part: a vendor you already paid for switches on an AI feature (sometimes a paid add-on, sometimes not).
- Agent sprawl is the newest part: autonomous agents built by a single person, given that person's full permissions, quietly running in the background.
- Token sprawl is the most expensive part: spend that grows with usage inside tools you already pay a flat fee for.
Sprawl is the roof over all four. If you only manage one, the rest keeps spreading.
The five sources of AI sprawl most companies miss
Think of AI sprawl the way a warehouse manager thinks about inventory: you can't control what you haven't counted, and different items come in through different doors. The five doors below show up in almost every engagement I've seen.
1. Shadow AI: tools employees bring in without asking
A marketer pays for a month of ChatGPT Plus on a personal card and expenses it as "research." A sales rep signs up for a free Claude account and drops customer notes into it for summarization. A support lead uses a browser extension to auto-reply to tickets. Individually, each choice is rational. Together, they're a data governance problem.
Signal it's happening: expense reports contain the words "productivity," "AI assistant," or specific tool names your IT department doesn't recognize. A quick grep through 90 days of expenses usually turns up five to ten tools nobody approved centrally.
2. Embedded vendor AI: features that turned on quietly
Your CRM, your helpdesk, your analytics platform, your design tool: most of them shipped AI features in the last 18 months. Some are free. Some added a line to your renewal without much discussion. Some quietly send your data to a model provider you didn't audit.
This is the door most companies forget to watch. You already approved the vendor. You didn't approve what the vendor started doing with your data after the last release.
Signal it's happening: a renewal comes back 20-40% higher than last year and the sales rep describes the bump as "new AI capabilities."
3. Vibe-coded internal apps: prototypes that went into production
One person on the team spends a Saturday building a small tool with Cursor or Claude Code. They show it Monday. The room loves it. By Wednesday, twelve people are using it. By the following Monday, it's hosted on somebody's personal Heroku, connected to a production database, and nobody outside that team knows it exists.
The AI Guys podcast flagged this pattern well: the prototype is a gift, the production version is a tax. The person who built it now owns support, uptime, and token costs, usually on top of their actual job.
Signal it's happening: somebody on the team starts fielding "hey, is it down?" messages for a tool that isn't in your tool list.
4. Team-level pilots that never ended
Marketing piloted one AI content tool. Ops piloted another for ticket triage. Product piloted a third for user research. All three were 30-day trials. All three became permanent without ever being renewed deliberately. Each one picked a different model provider. Each one has its own billing relationship.
Signal it's happening: three different departments can't agree on which AI tool is "the one the company uses" because each one has its own.
5. Citizen agents: autonomous agents built by a single person
This is the newest door and the scariest one. An agent, in plain terms, is an AI that doesn't just answer questions but takes actions on its own: reading emails, clicking buttons, sending messages, writing to databases. An employee builds one that reads their inbox, writes drafts, pulls data from the CRM, and posts to a Slack channel. The agent runs with that employee's full permissions. If the employee leaves, the agent keeps running until somebody notices. If the agent gets prompt-injected (tricked by hostile instructions hidden inside text it reads, like a malicious email), it has the keys to everything the employee had.
Reco, Okta, and Gravitee all wrote good pieces on this through 2025 and 2026. The thing they agree on: most companies don't have an inventory of agents. They have an inventory of people.
Signal it's happening: you can't answer "how many autonomous agents run in our company today?" in under five minutes.
How to inventory sprawl without slowing everyone down
The first instinct most leaders have is to send a survey. Don't. Surveys capture what people are willing to admit. You need what's actually running.
I run a three-layer audit in the first two weeks of any engagement where sprawl is the concern. Same three layers every time, because they map to the five sources above:
Layer 1, the financial layer. Pull the last 90 days of expense reports, credit card statements, and SaaS billing. Grep for AI, GPT, Claude, Gemini, Perplexity, Copilot, Anthropic, OpenAI, and any vendor name you recognize. This finds shadow AI and most citizen agents. It usually takes half a day. Owner: COO or CFO.
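If your expense system exports to CSV, the grep is a few lines of script. Here's a minimal sketch: the file name, column names, and keyword list are illustrative, so adapt them to whatever your export actually contains. The word-boundary regex matters, because a naive substring search for "ai" flags "maintenance" and "airline."

```python
import csv
import re
from collections import defaultdict

# Tool names from the grep list above; extend with vendors you recognize.
# The \b word boundaries keep "ai" from matching "maintenance" or "airline".
AI_PATTERN = re.compile(
    r"\b(ai|gpt|claude|gemini|perplexity|copilot|anthropic|openai)\b",
    re.IGNORECASE,
)

def scan_expenses(path):
    """Return expense rows grouped by the first keyword that flagged them.

    Assumes a CSV export with arbitrary columns (merchant, description,
    amount, ...); every cell in the row is checked.
    """
    hits = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            match = AI_PATTERN.search(" ".join(row.values()))
            if match:
                hits[match.group(1).lower()].append(row)
    return hits
```

Running this over 90 days of exports gives you the "five to ten tools nobody approved centrally" list in minutes; the manual half-day is mostly chasing down what each flagged line item actually is.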
Layer 2, the vendor layer. For every SaaS tool in your stack with a known contract, check the release notes from the last 12 months. Any mention of AI, ML, smart, auto, or intelligent is a candidate. Ask the vendor directly: what AI features are on by default, what data leaves our tenant, which model provider do you use. This finds embedded vendor AI. Owner: IT or procurement.
Layer 3, the network layer. Pull egress logs or API call metadata for the last 30 days. Filter for known AI provider endpoints (api.openai.com, api.anthropic.com, generativelanguage.googleapis.com, and the cloud-native ones on Azure, AWS Bedrock, and Vertex AI). This finds vibe-coded apps and citizen agents. Anything calling a model directly shows up. Owner: security or platform engineering.
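The endpoint filter is also scriptable. A minimal sketch, assuming your egress log export can be read as rows with a source field and a destination hostname field (the field names `src` and `dest_host` here are placeholders, not a real log schema):

```python
from collections import Counter

# Known model-provider endpoints from the list above; extend with the
# cloud-native hostnames your stack uses (Azure OpenAI, Bedrock, Vertex AI).
AI_ENDPOINTS = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
)

def count_ai_egress(rows):
    """Tally (source, destination) pairs for traffic to AI endpoints.

    `rows` is an iterable of dicts with 'src' and 'dest_host' keys;
    adapt the field names to whatever your log export actually uses.
    """
    counts = Counter()
    for row in rows:
        host = row["dest_host"].lower()
        if any(host == ep or host.endswith("." + ep) for ep in AI_ENDPOINTS):
            counts[(row["src"], host)] += 1
    return counts
```

The interesting output isn't the total, it's the sources: an internal IP making thousands of calls to api.anthropic.com that doesn't map to any tool on the official list is usually a vibe-coded app or a citizen agent.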
Three layers, three owners, three deliverables. Two weeks. At the end, you have a real list. Not a complete list, because nobody ever has a complete list, but a real one, ranked by cost, access, and blast radius.
The decision checklist: what do you do with each tool once you find it?
Once the inventory exists, every item on it gets one of four labels. This is the part where most companies stall, because the instinct is to either approve everything or kill everything. Both are wrong.
| Label | When to apply it | What happens next |
|---|---|---|
| Keep and sanction | Real use, real value, works with approved data boundaries | Move to the central stack, add to the official tool list, budget for it |
| Keep and contain | Useful but risky, e.g., a vibe-coded app used by six people | Move hosting, add SSO, assign an owner, re-evaluate in 90 days |
| Consolidate | Duplicates a tool you already have with ~80% overlap | Plan a migration, set a sunset date, communicate before you cut |
| Kill | No real use, no clear owner, or a compliance red line | Cut access, cancel billing, document the decision |
The one I see fumbled most often is "Keep and contain." Teams either refuse it (too risky, kill it) or rubber-stamp it (too useful, just leave it alone). Both responses train employees to hide the next one. Containment, with a clear owner and a review date, is what keeps the signal flowing.
Why fast consolidation backfires (the part nobody tells you)
Cutting sprawl too aggressively is its own problem. Part of what looks like sprawl is actually useful experimentation. If you rationalize the stack before you understand what's driving the duplication, you cut the wrong thing and the team just routes around you.
The honest answer: some sprawl is information about what people need that procurement hasn't figured out yet. If three different teams independently picked three different AI writing tools, that tells you something about the category. Maybe the problem is the tool. Maybe it's that you never gave them a sanctioned one.
The posture I'd take: track everything, contain the risky stuff immediately, consolidate the obviously duplicated stuff slowly, and leave a small "experimentation lane" open so the next useful tool surfaces through the front door instead of the side. We broke down the full matrix in when to consolidate AI tools and when not to.
What actually changes when you get sprawl under control
Three things, in the rough order companies notice them.
The bill gets smaller and more legible. Duplicate subscriptions disappear. Token usage concentrates in approved platforms where it's easier to negotiate. The CFO stops asking what "Miscellaneous SaaS" means.
The security picture sharpens. You know which tools touch which data. You know which agents run with which permissions. When a vendor has a breach (and they will), you can answer "were we exposed?" in an hour instead of a week.
The AI program actually starts producing results. MIT's State of AI in Business 2025 report, widely cited in the industry, found that 95% of enterprise GenAI pilots never reach production. Sprawl is one reason. When every team runs its own pilot, nothing gets the investment, data access, or engineering support it needs to cross the line from prototype to something the business actually depends on. A smaller, managed stack means a few pilots get real attention, and real attention is what gets pilots to production. We cover the cost side of this in the real cost of AI sprawl and the shadow AI side in shadow AI isn't the enemy.
Frequently Asked Questions
Is AI sprawl the same as shadow AI?
No. Shadow AI is one slice of sprawl, the unauthorized part. Sprawl includes sanctioned tools, vendor-embedded AI, and citizen-built agents that were all approved somewhere, just not together.
How much AI sprawl is normal?
Some is. Zapier's 2025 survey showed 28% of 1,000+ employee companies already run 10+ AI apps, so you're not alone. The question isn't tool count. It's whether you can answer three questions in under an hour: what tools, what data, who owns each one. If yes, the sprawl is manageable. If no, you have a real problem growing under the surface.
What's the first thing to do if we think we have AI sprawl?
Pull 90 days of expense reports and grep for AI tool names. Half a day of work, usually finds five to fifteen unauthorized tools, and costs nothing.
Who should own the AI sprawl problem: IT, security, or the business?
All three, with one person accountable. The working pattern is: business defines priorities, IT owns the inventory and the platform, security owns the guardrails. A single executive, usually a COO or CIO, owns the outcome. If three people own it, nobody owns it.
Sources
- MIT Initiative on the Digital Economy — State of AI in Business 2025
- McKinsey Quantum Black — The State of AI 2025
- Google Cloud — Announcing Gemini Enterprise and the AI agent ecosystem
- National Institute of Standards and Technology — NIST AI Risk Management Framework
- Stanford HAI — Artificial Intelligence Index Report 2025

Founder, Tech10
Doreid Haddad is the founder of Tech10. He has spent over a decade designing AI systems, marketing automation, and digital transformation strategies for global enterprise companies. His work focuses on building systems that actually work in production, not just in demos. Based in Rome.


