Securing AI agents is one of the most pressing data security challenges facing Australian businesses right now. Without the right controls in place, AI tools can access far more data than they should, and most businesses have no clear way to see it happening.
Microsoft has recently introduced two tools that make this much more achievable: Entra Agent ID and Microsoft Purview Data Security Investigations.
In this blog, I’ll explain what each tool does, why it matters for your business, and the practical steps you can take to get started.
The Problem With AI Agents Today
Over the last 12 to 18 months, many Australian businesses have experimented with Copilot, chatbots and small AI automations, often starting in a single team or function.
That has created real productivity gains, but it has also created a new kind of “shadow IT”, often called shadow AI, where no one can clearly answer basic questions: which AI agents have access to our data, and who controls them?
This is not a theoretical problem. When AI tools grow without a clear owner or access policy, your risk exposure grows with them. Two recent Microsoft updates start to close that gap:
Microsoft Entra Agent ID, which gives AI agents a proper, managed identity in your Microsoft tenant (your business’s dedicated environment in the Microsoft cloud)
Microsoft Purview Data Security Investigations, now generally available, which uses AI to help you find and investigate data risks across your Microsoft 365 environment far more quickly
Together, these tools help you move from unmanaged AI experiments to secure, production-grade AI that your board and IT team can actually account for.
What is Microsoft Entra Agent ID?
To understand Entra Agent ID, it helps to start with Microsoft Entra itself. Entra is the identity platform that already manages your users, groups and applications. Think of it as the system that decides who can log in, what they can access, and under what conditions. Entra Agent ID extends that same platform so AI agents become proper, managed identities too, rather than scripts running silently behind someone’s account.
With Entra Agent ID in place:
Each AI agent (for example, a Copilot Studio bot, a Fabric data agent or a custom Azure AI assistant) gets its own dedicated identity that you can view and manage in the Entra admin centre
Conditional Access for Agent ID, now in public preview, lets you apply the same access controls to agents as you do for human users. That includes requiring compliant devices, blocking risky sign-ins and limiting access by location. This is built on Zero Trust principles, meaning every agent must prove it should have access every time, rather than being trusted by default
New capabilities in Microsoft Defender recognise and protect these agent identities, rather than treating them as generic background traffic
For business leaders, the key shift here is accountability. You can point to a specific agent, see exactly what it is allowed to do, and adjust its access without touching individual staff accounts.
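In practice, that accountability starts with being able to pull agent identities out of your wider identity inventory for review. As a minimal sketch of that idea, here the dictionary shape and the "agent" type tag are illustrative assumptions, not the actual Microsoft Graph schema:

```python
# Sketch: separating AI agent identities from other workload identities in an
# exported inventory. The field names and the "agent" tag are illustrative
# assumptions, not the real Entra or Graph schema.

def find_agent_identities(identities):
    """Return identities tagged as AI agents, for review in an audit."""
    return [i for i in identities if i.get("identityType") == "agent"]

inventory = [
    {"displayName": "Invoice Copilot bot", "identityType": "agent"},
    {"displayName": "Payroll sync service", "identityType": "servicePrincipal"},
    {"displayName": "Fabric data agent", "identityType": "agent"},
]

agents = find_agent_identities(inventory)
print([a["displayName"] for a in agents])
```

The point is the separation itself: once agents are first-class entries in the inventory, they can be counted, audited and reviewed like any other identity.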
Why Securing Agent Identity Matters
Understanding what Entra Agent ID does is one thing. Understanding why it is needed is another. In many businesses today, AI agents authenticate by borrowing a user identity or using a generic service account, which is a shared login that nobody really owns. That creates three common problems:
Visibility: Security and IT teams cannot easily see how many agents exist, what they are doing or which data they touch
Oversharing: If an agent runs under a highly privileged account, it may access far more data than it needs. Copilot-style tools inherit those permissions by default, so the risk scales quickly
Offboarding risk: When staff leave, agents tied to their identities may keep running unnoticed, or break in ways that are difficult to diagnose
Entra Agent ID addresses all three of these issues. It lists agents in the Entra admin centre so they can be discovered, audited and reviewed.
It allows you to assign least-privilege access to each agent, meaning each agent only gets access to exactly what it needs to do its job and nothing more. Additionally, it applies Conditional Access and threat protection policies that are specific to agents, rather than bundling them in with human sign-ins.
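The least-privilege idea lends itself to a simple check: compare what an agent has been granted against what its job actually requires. The scope names below are illustrative; in a real tenant they would come from your app registrations:

```python
# Sketch: a simple least-privilege check for an agent's permission scopes.
# Scope names are illustrative examples, not a recommendation.

def excess_scopes(granted, needed):
    """Return any scopes an agent holds beyond what its job requires."""
    return sorted(set(granted) - set(needed))

granted = ["Mail.Read", "Sites.ReadWrite.All", "User.Read.All"]
needed = ["Mail.Read"]  # the agent only reads mailboxes

extra = excess_scopes(granted, needed)
if extra:
    print(f"Over-permissioned: remove {extra}")
```

Running a check like this across every agent in your inventory is a quick way to surface the over-permissioned ones first.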
If you are serious about running Copilot Studio, Fabric data agents or Azure AI in a production environment, giving each agent a dedicated, governed identity should be a non-negotiable part of your setup.
Microsoft Purview Data Security Investigations
Knowing who has access is only part of the picture. You also need to know what is happening to your data, and that is where Microsoft Purview Data Security Investigations comes in.
Now generally available, it brings AI to the time-consuming work of investigating data security incidents.
Purview Data Security Investigations lets your security or compliance team:
Search across your Microsoft 365 environment, including email, Teams messages, SharePoint, OneDrive, and Copilot prompts and responses, to locate data relevant to an investigation
Use AI-powered deep content analysis to surface sensitive data, exposed credentials, potential fraud and other risks across large datasets, in more than 95 languages (per Microsoft’s February 2026 general availability announcement)
Visualise data flow graphs to see how sensitive information moved between users, locations and systems, helping you understand the full scope of an incident and which areas were affected
Reduce investigation times from weeks to hours by bringing search, triage and remediation workflows into a single tool (per Microsoft’s February 2026 general availability announcement)
For businesses without large security teams, this level of automation makes a real practical difference.
When something goes wrong, whether that is an accidental share, a suspected insider threat or a misconfigured AI agent, you need to understand the impact quickly so you can act.
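The data flow graph idea above can be sketched in miniature: model each share as an edge, then walk outward from the affected file to find everyone it could have reached. The events here are invented for illustration; in practice Purview builds this view for you:

```python
# Sketch: a toy data flow graph built from sharing events, walked breadth-first
# to find the scope of an incident. Event data is made up for illustration.
from collections import defaultdict, deque

def reachable_from(events, start):
    """Walk share events outward from `start` to find the incident's scope."""
    graph = defaultdict(list)
    for src, dst in events:
        graph[src].append(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

events = [
    ("finance-report.xlsx", "alice"),
    ("alice", "external-partner"),
    ("alice", "bob"),
]
print(sorted(reachable_from(events, "finance-report.xlsx")))
```

Even this toy version shows why the graph view matters: a single accidental share can fan out to parties you would not find by looking at the original file alone.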
How Entra Agent ID and Purview Work Together
Entra Agent ID and Purview Data Security Investigations solve different parts of the same problem, and they are strongest when used together.
Entra Agent ID controls who (that is, which agent) can access which systems and data, and under what conditions.
Purview Data Security Investigations helps you understand what happened to your data if an agent, or a person, behaves in a risky or unexpected way.
Microsoft Defender’s updated support for Entra Agent IDs adds a further protection layer by monitoring and responding to threats that involve agents, not just users or devices.
A practical example helps to bring this to life. Imagine you deploy an AI agent that reads customer emails, generates responses in Copilot and updates your CRM. With these tools in place, you can:
Give that agent a tightly scoped Entra Agent ID with only the permissions it genuinely needs
Apply Conditional Access so it can only operate from trusted networks and under low-risk conditions
Use Purview Data Security Investigations to quickly understand which messages, documents and prompts it accessed if you ever suspect misuse or misconfiguration
That is what secured AI agents in Microsoft 365 look like in practice: powerful, measurable, auditable and recoverable. With that picture in mind, here is how to get started.
Practical Steps for Australian SMB Leaders
You do not need a large IT team to begin using these capabilities. A staged approach works well, and you can build on each step as you go.
Step 1: Get clear on your AI agents
Start by asking your IT team or partner to map your current AI usage. That includes Copilot for Microsoft 365, Copilot Studio bots, Power Platform flows that call AI, Fabric data agents, and any Azure OpenAI or third-party agents in use.
From that list, identify which tools are acting autonomously on behalf of users or systems. Those are your agents, and they are the ones that need proper identity management.
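A mapping exercise like this can be as simple as a spreadsheet with one flag per tool. As a sketch (the inventory entries and the "autonomous" flag are placeholders for whatever your audit actually records):

```python
# Sketch: triaging an AI inventory into agents (act on their own) and
# assistants (user-driven). Entries and the "autonomous" flag are placeholders.

def needs_agent_identity(tool):
    """Tools that act autonomously need their own managed identity."""
    return tool.get("autonomous", False)

inventory = [
    {"name": "Copilot for Microsoft 365", "autonomous": False},
    {"name": "Overnight invoice triage bot", "autonomous": True},
    {"name": "Fabric data agent", "autonomous": True},
]

agents = [t["name"] for t in inventory if needs_agent_identity(t)]
print(agents)
```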
Step 2: Introduce Entra Agent ID for new AI projects
For any new Copilot Studio, Fabric or Azure AI solution, plan to use Entra Agent ID from the start rather than reusing existing user accounts. Define least-privilege access for each agent, covering which apps, data sources and APIs it actually needs to do its job.
Work with your partner to pilot Conditional Access for agents, currently in public preview, and set up basic rules like blocking high-risk sign-ins and requiring compliant devices where appropriate.
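To make the pilot concrete, here is a sketch of the kind of policy payload involved. The field names follow Microsoft Graph's published conditional access schema, but how agent identities are targeted within it is an assumption on my part; check the current documentation before using anything like this:

```python
# Sketch: a Conditional Access policy payload of the kind sent to Microsoft
# Graph. Field names follow the published conditional access schema; targeting
# agents via the "users" condition is an assumption, not confirmed behaviour.
import json

def agent_baseline_policy(agent_object_id):
    return {
        "displayName": "Block high-risk sign-ins for AI agents",
        "state": "enabledForReportingButNotEnforced",  # start in report-only mode
        "conditions": {
            "users": {"includeUsers": [agent_object_id]},
            "applications": {"includeApplications": ["All"]},
            "signInRiskLevels": ["high"],
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

policy = agent_baseline_policy("00000000-0000-0000-0000-000000000000")
print(json.dumps(policy, indent=2))
```

Starting in report-only mode is deliberate: you can observe what the policy would have blocked before enforcing it against a production agent.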
Step 3: Plan a transition for existing agents
Where possible, move existing bots and automations away from generic service accounts to dedicated Agent IDs. As you go, clean up unused agents and old accounts. This step alone can meaningfully reduce your security risk, even before you turn on any new tools.
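A simple way to find cleanup candidates is to flag agents with no recent sign-in activity. The dates below are made up; in practice they would come from your tenant's sign-in logs:

```python
# Sketch: flagging agents with no recent sign-ins as cleanup candidates.
# Sign-in dates are invented; real ones would come from sign-in logs.
from datetime import date, timedelta

def stale_agents(agents, today, max_idle_days=90):
    """Return names of agents idle for longer than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [a["name"] for a in agents if a["lastSignIn"] < cutoff]

agents = [
    {"name": "Quote generator bot", "lastSignIn": date(2025, 1, 10)},
    {"name": "HR onboarding agent", "lastSignIn": date(2025, 6, 1)},
]

print(stale_agents(agents, today=date(2025, 6, 15)))
```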
Step 4: Enable Purview Data Security Investigations
Confirm you have the right Microsoft Purview licensing, then enable Data Security Investigations in your tenant. Work with IT to set up role-based access so only authorised staff can run and view investigations.
Once it is live, run at least one practice scenario with your team. For example, walk through what you would do if a sensitive spreadsheet was shared accidentally, or if an agent accessed data it should not have.
Running a practice drill before a real incident gives your team hands-on familiarity with the tool when it counts.
Step 5: Update your policies, training and governance
Refresh your security and acceptable-use policies to cover AI agents explicitly. That means documenting who owns each agent, how they are approved, how access is reviewed, and how they are decommissioned when no longer needed.
Make sure your staff understand that Copilot and other AI tools inherit their access rights, so a sharing or labelling mistake can have a much larger impact when an agent is involved.
Include AI agents in your regular access review cycles, just as you would for apps, groups and privileged accounts.
How CG TECH Can Help You Secure AI Agents in Microsoft 365
Most businesses do not have spare internal capacity to track every new Microsoft release, design an AI security model and keep delivery moving at the same time. That is where working with a specialist partner pays off.
CG TECH works with Australian businesses across all of these areas:
AI and data security assessment: We review your current Copilot, Fabric, Power Platform and Azure AI usage and identify where agents already exist, including shadow AI that has grown without a formal approval process.
We also identify quick-win improvements such as right-sizing permissions and cleaning up old accounts.
Entra Agent ID strategy: We define the right setup for different agent types (customer-facing bots, internal assistants, data-processing agents) and configure Conditional Access policies that reflect Australian risk profiles, including remote work, regional offices and sector-specific needs.
Purview Data Security Investigations setup: We help you enable and tune the tool so it focuses on the data and scenarios that matter most in your business.
We also run practice exercises with your leaders, IT and security teams so everyone knows their role when an incident occurs.
Secure AI solutions, built right: We design and deliver Copilot Studio, Fabric and Azure AI solutions that use Agent IDs, Purview and Defender from day one, rather than retrofitting security later.
We connect the technical work to clear business outcomes, including faster response times, better customer service and more accurate reporting, so you can explain the value to your board and staff.
Build a Secure AI Setup, Step by Step
AI is becoming more deeply embedded in everyday tools like Microsoft 365, Dynamics and your line-of-business applications. The question is not whether you will use AI agents. It is whether you will use them safely and deliberately.
Microsoft Entra Agent ID gives you a way to treat AI agents as proper, governed identities. Microsoft Purview Data Security Investigations gives you a way to see what is happening to your data and respond quickly when something goes wrong.
Start with visibility, introduce Agent IDs for new projects, enable data investigations, and update your governance. That structured approach lets you get the real benefits of AI without losing control.
About the Author
Carlos Garcia is the Founder and Managing Director of CG TECH, where he leads enterprise digital transformation projects across Australia.
With deep experience in business process automation, Microsoft 365, and AI-powered workplace solutions, Carlos has helped businesses in government, healthcare, and enterprise sectors streamline workflows and improve efficiency.
He holds Microsoft certifications in Power Platform and Azure and regularly shares practical guidance on Copilot readiness, data strategy, and AI adoption.