As artificial intelligence (AI) becomes embedded in everyday work via Microsoft 365 Copilot and other platforms, I see two clear trends. On one hand, organisations are excited about the productivity gains.
On the other, there is a growing risk that those gains will be eroded by an unintended consequence: the rapid proliferation of AI agents without oversight, or what we call agent sprawl.
Whether you are in government, education, not-for-profit, retail, engineering or healthcare, this is an issue worth addressing.
At CG TECH, we believe the key is enabling safe innovation, not blocking it, and helping our clients unlock the full potential of modern work with confidence.
What Is Agent Sprawl?
Most of us are familiar with shadow IT, where unsanctioned apps and services appear outside of formal IT control. Agent sprawl is the next iteration of that problem in the age of AI.
It happens when multiple AI agents, or “bots”, are created in platforms like Copilot, Power Platform or Teams by business users, often with limited visibility and little governance.
These agents might automate routine tasks, integrate with data sources, or support teams in chat environments. But when they multiply across an organisation without oversight, the risk profile changes.
According to Rencore, “Agent sprawl introduces risks across three critical dimensions: security, compliance and cost.” Microsoft also notes that while the business demand is real, “unauthorised AI presents deeper behavioural challenges beyond tools alone,” as shared in a Microsoft Tech Community blog.
Why It Matters
Left unchecked, agent sprawl can impact an organisation in three meaningful ways.
1. Data Exposure and Compliance Risk
Agents may access sensitive files, bypass retention and audit controls, or introduce unmanaged workflows that fall outside existing policies. Microsoft’s guidance highlights how this behaviour can lead to oversharing and data leakage.
2. Operational Complexity and Cost Creep
Each agent may consume compute, storage or licences. Without central oversight, duplication, orphaned agents and inefficiencies can appear, leading to unnecessary costs, as noted by Rencore.
3. Reduced User Trust and Stalled Innovation
If users feel IT is blocking their tools, or if business units create their own “rogue” agents to get the job done, the promise of Copilot-class AI may never scale as intended. This concern was outlined by eShare.
That is why the challenge is not just technical. It is also cultural and procedural. We must balance business agility with responsible governance.
Start With Visibility and Discovery
Before you scale your agent strategy, you need to know what already exists. I recommend starting with an agent inventory and usage assessment.
Key Actions
Identify all agents currently deployed, including custom, third-party and built-in.
Understand data sources, permissions, connectors and user access for each agent.
Map owners and usage patterns. Are any legacy or orphaned?
Review cost, licensing and metadata to identify duplication.
Once you have that baseline, you can prioritise high-risk agents that handle sensitive data, external sharing or critical systems for immediate review.
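To make the triage step concrete, here is a minimal Python sketch. It assumes you have exported your inventory to a hypothetical agents.csv with illustrative columns (name, owner, handles_sensitive_data, shared_externally, last_used); adapt the field names and thresholds to whatever your discovery tooling actually produces.

```python
# Minimal triage sketch: flag high-risk agents from an exported inventory.
# The CSV layout (agents.csv) and its column names are assumptions for
# illustration, not the output of any specific discovery tool.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # treat agents unused for 90+ days as orphan candidates

def risk_flags(agent: dict) -> list[str]:
    """Return human-readable risk flags for one inventory row."""
    flags = []
    if agent["handles_sensitive_data"].lower() == "yes":
        flags.append("sensitive data")
    if agent["shared_externally"].lower() == "yes":
        flags.append("external sharing")
    if not agent["owner"].strip():
        flags.append("no owner")
    last_used = datetime.fromisoformat(agent["last_used"])
    if datetime.now() - last_used > STALE_AFTER:
        flags.append("stale / possibly orphaned")
    return flags

with open("agents.csv", newline="") as f:
    for agent in csv.DictReader(f):
        flags = risk_flags(agent)
        if flags:
            print(f"{agent['name']}: review first ({', '.join(flags)})")
```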
Outcome: When you see what already exists, you move from reactive defence to proactive management.
Building a Governance Framework That Works
A governance framework does not have to be complex. Our approach at CG TECH is pragmatic and built around co-creation with business units.
| Area | Key Actions | Business Outcome |
| --- | --- | --- |
| Define roles and responsibilities | Establish who approves agent creation, publishes updates, and decommissions agents. | Clear ownership and fewer orphaned resources. |
| Create approval workflows | Use templates and naming conventions to ensure agents are catalogued and approved before deployment. | Faster innovation with built-in guardrails. |
| Enforce lifecycle management | Agents should have defined review periods, retirement triggers and audit logs. | Reduced risk of legacy or unused agents accumulating. |
| Monitor and report | Use dashboards to track agent usage, redundancy, cost and data exposure. | Real-time visibility and better return on investment. |
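As a rough illustration of the lifecycle row above, here is a small Python sketch of what an agent register could look like. The fields, review period and example entry are assumptions for the sake of the example, not a product API.

```python
# Sketch of a lifecycle register matching the framework above. Field names,
# the review period and the sample entry are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentRecord:
    name: str
    owner: str                # accountable person from "roles and responsibilities"
    approved_on: date
    review_every: timedelta   # governance-defined review period
    last_reviewed: date

    def review_due(self, today: date | None = None) -> bool:
        today = today or date.today()
        return today - self.last_reviewed >= self.review_every

register = [
    AgentRecord("expense-triage-bot", "finance-ops@contoso.example",
                date(2024, 11, 1), timedelta(days=180), date(2024, 11, 1)),
]

for record in register:
    if record.review_due():
        print(f"Review due: {record.name} (owner: {record.owner})")
```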
For further reading, Microsoft’s Agent Security Playbook for Copilot Studio provides a useful reference.
When you set up governance like this, you shift from “IT blocking business” to “IT enabling business” with structure and trust.
Technical Controls That Support Governance
Governance only works when supported by the right technical controls. Here are several that I have seen deliver strong results.
Data Loss Prevention (DLP) for Agents
Apply DLP policies to restrict agents from transferring sensitive data to unmanaged connectors or external services. Microsoft’s playbook recommends using sensitivity labels and monitoring policies to detect risks in real time.
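To illustrate the idea, here is a hypothetical pre-publish check in Python, loosely modelled on how Power Platform DLP policies group connectors into business, non-business and blocked categories. The connector names and group contents are placeholders; in practice these policies are defined and enforced by the platform and its admins, not hand-coded.

```python
# Illustrative pre-publish check modelled on Power Platform DLP connector
# groups (business / non-business / blocked). The group contents below are
# placeholders; real policies live in the platform, not in code.
BLOCKED = {"ftp", "generic-http"}          # connectors agents may never use
NON_BUSINESS = {"social-media-connector"}  # allowed, but not alongside business data

def dlp_violations(agent_name: str, connectors: set[str],
                   touches_business_data: bool) -> list[str]:
    problems = [f"{c} is blocked" for c in connectors & BLOCKED]
    if touches_business_data and connectors & NON_BUSINESS:
        problems.append("mixes business data with non-business connectors")
    return [f"{agent_name}: {p}" for p in problems]

print(dlp_violations("quote-bot", {"sharepoint", "generic-http"}, True))
```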
Microsoft Purview for Oversharing Detection
Use Microsoft Purview Data Security Posture Management (DSPM) for AI to identify oversharing, sensitive files used in agents or improper sharing links.
Role-Based Access and Endpoint Compliance
Ensure agents run under controlled identities. Enforce conditional access and endpoint compliance for devices interacting with agents. This creates a consistent layer of control without slowing innovation.
Outcome: When technical policies reinforce governance, compliance becomes part of everyday operations.
Educating Makers and Business Units
People build agents, so investing in education is essential. Empowering makers helps reduce shadow AI while encouraging responsible innovation.
What to Include in Training
Explain the business benefits of agents and how they align with goals such as productivity, automation and efficiency.
Train on approved connectors, access rules, naming standards and the publishing process (see the naming-check sketch after this list).
Share examples of both success and risk, such as unintended sharing or duplicate agents.
Provide “office hours” or a centre of excellence where makers can ask questions.
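As a simple example of what a naming standard can look like in practice, here is a toy Python validator. The dept-purpose-agent convention and the department codes are assumptions to replace with your own.

```python
# Toy validator for an assumed naming standard of the form
# <dept>-<purpose>-agent (e.g. "hr-onboarding-agent"). Substitute your own
# convention and approved department codes.
import re

DEPARTMENTS = {"hr", "fin", "ops", "it"}
PATTERN = re.compile(r"^(?P<dept>[a-z]+)-[a-z0-9]+(?:-[a-z0-9]+)*-agent$")

def valid_name(name: str) -> bool:
    m = PATTERN.match(name)
    return bool(m) and m.group("dept") in DEPARTMENTS

for candidate in ["hr-onboarding-agent", "CarlosTestBot"]:
    print(candidate, "->", "ok" if valid_name(candidate) else "rejected")
```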
When teams feel supported, they are more likely to follow approved paths and less likely to create risk.
Outcome: A culture of responsible AI where innovation happens safely.
Scaling With Confidence
Once visibility, governance and controls are in place, you can scale AI agent adoption confidently.
Steps to Scale Responsibly
Launch an agent factory: Pre-approve templates and provide fast lanes for low-risk agents, with deeper review for high-risk ones (see the triage sketch after this list).
Encourage reuse: Catalogue agents by domain or department to prevent duplication.
Monitor adoption: Track agents by risk tier, reduction in unsupported tools and cost savings achieved.
Review and retire: Schedule quarterly reviews and remove unused agents to maintain efficiency.
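Here is a small sketch of how the fast-lane routing might be expressed. The risk tiers and the rules behind them are illustrative assumptions rather than a fixed standard.

```python
# Sketch of the "fast lane" idea: route low-risk agent proposals straight to
# a pre-approved template path and send everything else to fuller review.
# The tiering rules are illustrative assumptions.
def risk_tier(handles_sensitive_data: bool, external_sharing: bool,
              writes_to_systems: bool) -> str:
    if handles_sensitive_data or external_sharing:
        return "high"      # deeper review: security and compliance sign-off
    if writes_to_systems:
        return "medium"    # standard review: owning team sign-off
    return "low"           # fast lane: pre-approved template, auto-catalogue

proposal = {"handles_sensitive_data": False, "external_sharing": False,
            "writes_to_systems": True}
print("route:", risk_tier(**proposal))
```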
Scaling this way ensures AI remains a strategic advantage, not a security concern.
Final Thoughts
The journey to responsible AI does not end with deploying a few agents. It begins with understanding how your organisation is adopting them, intentionally or otherwise.
At CG TECH, we believe that by combining governance, technical controls and education, organisations can move from uncertainty to confidence.
By acting now, your organisation can embrace AI agents not as a source of risk but as a long-term advantage. From insights to execution, we help you move forward with trusted, practical and secure AI adoption.
Unchecked agent sprawl is not inevitable. With the right approach, you can turn the promise of Copilot-class AI into measurable value.
About the Author
Carlos Garcia is the Founder and Managing Director of CG TECH, where he leads enterprise digital transformation projects across Australia.
With deep experience in business process automation, Microsoft 365, and AI-powered workplace solutions, Carlos has helped businesses in government, healthcare, and enterprise sectors streamline workflows and improve efficiency.
He holds Microsoft certifications in Power Platform and Azure and regularly shares practical guidance on Copilot readiness, data strategy, and AI adoption.