
It’s fair to say we’re creating more AI-generated content than ever before, and it’s getting harder to tell fact from fiction.

There’s no label, no marker, no trail to show whether a video, a recording, or a presentation was made by a person or shaped by AI.

For most businesses, that’s fine today. But it won’t stay that way. Regulators, customers, and industry bodies are paying close attention to how AI-generated content is used and disclosed.

When expectations become binding, businesses that haven’t built transparency into their content processes will find themselves scrambling to catch up.

Beyond regulation, there’s a trust issue. If AI-generated content can’t be identified, it can’t be audited, disputed, or reviewed after the fact, and the reputational and legal exposure is real.


What Microsoft Is Doing About It

Microsoft is now rolling out an AI watermarking policy for Microsoft 365. Tracked in the Microsoft 365 message centre as MC1221451, it lets organisations add a visual watermark to AI-generated or AI-altered video and an audio watermark to AI-generated audio.

General availability is targeted for mid-April 2026. It’s controlled at the admin level via the Cloud Policy service, and it’s off by default, so organisations need to actively choose to turn it on and define where it applies.


Why This Matters to Me Personally

I’ve spent a lot of time helping businesses think through how to adopt AI safely and responsibly. One of the gaps we keep running into isn’t technology. It’s traceability.

Teams are using Copilot to create great content, but beyond the obvious tells, there’s no consistent way to know which content was AI-generated, which was AI-assisted, and which was entirely human-made.

That gap creates risk for boards, risk teams, marketing departments, and legal teams, and it tends to feel invisible until something goes wrong.

This policy is a practical, low-friction way to start closing that gap.


What’s Covered in This Blog

In the rest of this post, I’ll walk through:

  • How the watermark policy works and what it covers
  • The governance benefits for boards, risk teams, and marketing departments
  • What it doesn’t cover, and where you still need to fill in the gaps
  • Practical steps to review, plan, and implement the policy in your environment
  • How CG TECH can help if you’d like a hand getting set up

How the AI Watermarking Policy Works in Microsoft 365

What It Covers

The policy is called “Add watermarks to content generated or altered by using AI in Microsoft 365”. When enabled by an admin through the Cloud Policy service, it automatically applies a watermark whenever AI generates or significantly alters content in supported apps.

In practice, this covers two types of content:

  • Video: A visual watermark is added to AI-generated or AI-altered video content created in Microsoft 365 apps
  • Audio: An audio watermark is added to AI-generated audio, for example a narration or recording generated by Copilot

The watermark is applied automatically in the background. End users don’t need to do anything, and it doesn’t interrupt how your teams work.


What It Doesn’t Cover

It’s worth being clear about what this policy doesn’t do, because I’ve already seen some misunderstanding in the market.

The watermarking policy does not cover images. Image watermarking in Microsoft 365 is handled through a separate user-level privacy setting, not this admin policy. It also doesn’t cover content created outside Microsoft 365, or AI-generated text in documents.

The scope is specifically video and audio content created or altered by Microsoft 365 AI tools.

That means this is one piece of a broader AI content governance approach, not the whole picture. We’ll come back to that shortly.


How the Metadata Works

Alongside the visible watermark, Microsoft 365 records metadata with supported content, including which AI model and app were used to generate it.

Think of it as a paper trail sitting behind the watermark: when a question comes up about a particular piece of content, such as whether it was AI-generated, which tool created it, or when it was produced, there’s a clear answer available in the system.

For compliance teams and risk functions, that’s a meaningful shift from the current situation, where most businesses have no reliable way to trace AI-generated content after the fact.
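
To make that concrete, here’s a rough Python sketch of the kind of check a compliance team could run once this sort of metadata is available as an export or report. The file name and field names (file, app, model, created, ai_generated) are placeholders we’ve made up for illustration, not Microsoft’s actual schema; the point is simply that once provenance metadata exists, questions like “which AI-touched files went out last quarter?” become a query rather than a guess.

```python
import csv
from datetime import datetime

# Hypothetical export of content metadata (file name and field names are
# illustrative placeholders, not Microsoft's actual schema).
REPORT = "content_metadata_export.csv"

def ai_content_since(report_path: str, since: datetime) -> list[dict]:
    """Return rows for AI-generated or AI-altered items created on or after 'since'."""
    hits = []
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            created = datetime.fromisoformat(row["created"])
            if row["ai_generated"].lower() == "true" and created >= since:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for item in ai_content_since(REPORT, datetime(2026, 1, 1)):
        # Each row tells you which app and model produced the content, and when.
        print(f'{item["file"]}: generated by {item["app"]} ({item["model"]}) on {item["created"]}')
```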


The Business Case for Turning This On

For Boards and Risk Teams

AI transparency is moving up the board agenda quickly. In our governance work with Australian businesses, we’re seeing more boards ask their leadership teams to report on how AI is being used, and that includes how AI-generated content is being disclosed internally and externally.

For a deeper look at what a solid AI governance approach looks like, our blog on Unified AI Governance covers the full framework.

Enabling the watermarking policy gives boards a clear, auditable signal: when AI touches a video or audio file in Microsoft 365, it’s marked. That’s a straightforward answer to a growing governance question.


For Compliance and Legal Teams

Australia’s Privacy Act 1988 creates clear obligations around how personal information is collected, stored, and shared.

As AI-generated content becomes more common in client communications, legal filings, and HR processes, being able to demonstrate that AI content was properly identified and disclosed will matter.

The watermarking policy, used alongside Microsoft Purview data classification and retention policies, builds an evidence base you can actually point to.

For businesses already working through their AI governance controls, our practical guide to AI governance outlines how Purview and DLP fit into the broader picture.


For Marketing and Communications Teams

AI is already part of most marketing workflows (including ours), from video editing and content generation to voiceover production. The reality is that teams are moving faster because of it.

But with speed comes the question of disclosure, especially in regulated industries like financial services, healthcare, and professional services, where standards around content accuracy and authorship are higher.

A watermark policy gives marketing teams a clear rule to work with: if Microsoft 365 AI touched this content, it’s automatically marked.

That removes the grey area and means teams don’t have to make individual judgement calls about whether a given piece of content needs to be disclosed.


What This Policy Doesn’t Replace

You Still Need a Broader AI Content Governance Approach

The watermarking policy is an important step, but it isn’t a standalone solution. It only covers video and audio content generated by Microsoft 365 tools. Your broader AI content governance should also address:

  • Text content: How do you identify and review AI-generated text in documents, proposals, and client communications?
  • Third-party tools: If your teams are using AI tools outside Microsoft 365, those won’t be captured by this policy
  • Approval workflows: Who reviews AI-generated content before it’s sent externally, and what sign-off is required?
  • Training: Do your people understand what the watermark means, and how to explain it to clients or stakeholders if asked?

These are the kinds of gaps we work through with clients as part of a proper AI governance review.

We covered the most common AI risk scenarios we see across Australian businesses in our earlier post on securing AI agents in Microsoft 365. Many of the same principles apply here.


What About AI Agents Generating Content Automatically?

This is a question that’s coming up more often as businesses start deploying AI agents that create content as part of a workflow. For example, an agent that automatically generates a weekly operations video summary.

The watermarking policy will apply to that content too, provided it’s generated within Microsoft 365. But the governance question goes deeper: who’s accountable for reviewing automatically generated content before it’s distributed?

If you’re moving toward agentic workflows, that’s a conversation worth having before you scale. We looked at this directly in our blog on the Agentic System of Work, if you’d like to explore that side of things further.


Practical Steps to Get Ready Before Mid-April

Step 1: Confirm Your Licensing and Admin Access

The watermarking policy is configured through the Microsoft 365 Cloud Policy service, which is available to commercial Microsoft 365 tenants. Check with your IT team or Microsoft partner that you have access and that the right admin accounts are set up to manage Cloud Policy.
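
If you want to verify the admin side yourself, the sketch below uses the Microsoft Graph REST API to list who currently holds the Global Administrator role in your tenant (Global Admins can manage Cloud Policy; your tenant may also delegate this to other roles). It assumes you’ve already acquired an access token, for example via MSAL with the Directory.Read.All permission, and it’s a starting point rather than a finished script.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token acquired via MSAL with Directory.Read.All>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def members_of_role(display_name: str) -> list[str]:
    """List members of an activated directory role, e.g. 'Global Administrator'."""
    roles = requests.get(f"{GRAPH}/directoryRoles", headers=HEADERS).json()["value"]
    role = next((r for r in roles if r["displayName"] == display_name), None)
    if role is None:
        return []  # role has not been activated in this tenant
    members = requests.get(
        f"{GRAPH}/directoryRoles/{role['id']}/members", headers=HEADERS
    ).json()["value"]
    return [m.get("userPrincipalName", m.get("displayName", "unknown")) for m in members]

if __name__ == "__main__":
    for upn in members_of_role("Global Administrator"):
        print(upn)
```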

Step 2: Map Where AI-Generated Video and Audio Is Being Produced

Before you enable the policy, take stock of where in your business AI is already being used to create or alter video and audio content. Common examples include:

  • Copilot-generated meeting recordings or summaries in Teams
  • AI-altered training videos created using Microsoft 365 apps
  • AI-generated voiceovers or narrations in presentations

Understanding your use cases first will help you configure the policy to match your actual risk areas, rather than applying a blanket setting without thinking it through.
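
One way to run this exercise is to keep the findings in a simple register rather than scattered notes. Below is a small, entirely hypothetical Python sketch of what that register might look like; the teams, apps, and use cases are placeholder entries, and the output is just a per-team summary you can take into Step 3.

```python
from collections import defaultdict

# Hypothetical register of AI video/audio use cases (illustrative entries only).
use_cases = [
    {"team": "Operations", "app": "Teams + Copilot", "content": "meeting recording summaries", "audience": "internal"},
    {"team": "L&D", "app": "Microsoft 365 video tools", "content": "AI-altered training videos", "audience": "internal"},
    {"team": "Marketing", "app": "Copilot", "content": "AI-generated voiceovers", "audience": "external"},
]

# Group by team so you can see where AI-generated video and audio is actually produced.
by_team = defaultdict(list)
for case in use_cases:
    by_team[case["team"]].append(case)

for team, cases in by_team.items():
    print(f"{team}:")
    for c in cases:
        print(f"  - {c['content']} ({c['app']}, audience: {c['audience']})")
```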

Step 3: Decide Where Watermarking Is Mandatory Versus Optional

Work with your compliance, legal, and communications teams to agree on scope.

For most businesses, the right starting point is this: any AI-generated video or audio content that leaves your internal environment, whether to clients, partners, regulators, or the public, should carry a watermark.

Internal content may carry lower risk, but a consistent approach is easier to explain and audit.
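
If it helps to make that starting point explicit for your teams, here’s a small sketch of the rule expressed as code. The categories and logic are our suggested default, not a Microsoft setting; adjust them with your compliance and legal teams before you rely on them.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    kind: str           # "video" or "audio"
    ai_generated: bool  # generated or altered by Microsoft 365 AI tools
    audience: str       # "internal" or "external" (clients, partners, regulators, public)

def watermark_required(item: ContentItem) -> str:
    """Illustrative scoping rule, not a Microsoft policy setting."""
    if not item.ai_generated or item.kind not in ("video", "audio"):
        return "not applicable"
    if item.audience == "external":
        return "mandatory"
    return "recommended"  # internal content: lower risk, but consistency is easier to audit

print(watermark_required(ContentItem("video", True, "external")))  # mandatory
print(watermark_required(ContentItem("audio", True, "internal")))  # recommended
```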

Step 4: Update Your AI Acceptable-Use Policy

If you don’t already have an AI acceptable-use policy, now’s a good time to put one in place. If you do, add a section that covers AI-generated content specifically: what must be watermarked, how to disclose AI content to external parties, and how the metadata record should be maintained.

Step 5: Brief Your Teams Before Go-Live

When the policy is enabled, your marketing, communications, legal, and operations teams need to understand what the watermark is and what it means.

A short briefing, even just a one-page explainer, will reduce confusion and make sure teams can answer client questions confidently.


How CG TECH Can Help

We work with ANZ businesses across all of this: reviewing your current AI content production, helping you configure the Cloud Policy service, and connecting the watermarking policy into your broader Microsoft Purview and compliance setup.

If you’d like to get ahead of the mid-April general availability date, or if you’d like a broader review of your AI governance controls, our team is ready to help.


About the Author

Carlos Garcia is the Founder and Managing Director of CG TECH, where he leads enterprise digital transformation projects across Australia.

With deep experience in business process automation, Microsoft 365, and AI-powered workplace solutions, Carlos has helped businesses in government, healthcare, and enterprise sectors streamline workflows and improve efficiency.

He holds Microsoft certifications in Power Platform and Azure and regularly shares practical guidance on Copilot readiness, data strategy, and AI adoption.

Connect with Carlos Garcia, Founder and Managing Director of CG TECH, on LinkedIn.

Sources

  1. Microsoft Support — Include a watermark when content from Microsoft 365 is AI-generated
  2. Microsoft Learn — Add watermarks to content generated or altered by using AI in Microsoft 365
  3. MWPro — Microsoft 365 AI Watermark Policy Update (MC1221451.3)
  4. Security Nebula AI — New AI Transparency Policy in Microsoft 365