Microsoft has been selling its AI assistant — Microsoft Copilot — as the future of work. It is built into Windows 11. It is inside Word, Excel, PowerPoint, and Teams. Microsoft has spent billions promoting it as your intelligent work companion that will make you more productive.
And then someone actually read the legal fine print.
Hidden inside Microsoft Copilot's official Terms of Use, updated in October 2025 and noticed by the internet in April 2026, is a passage that stopped everyone in their tracks.
"Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."
The same company that is charging businesses money to use Copilot for serious work just admitted in writing that it is entertainment only. And you use it at your own risk.
Let's break down exactly what is happening here and what it means for you.
What Exactly Did Microsoft Say?
The exact words from Microsoft Copilot's official Terms of Use are:
"Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."
This appears under a section of the document titled "IMPORTANT DISCLOSURES & WARNINGS", printed in bold capital letters.
It does not get more official than that.
The same document also says:
- Microsoft makes no warranty or representation of any kind about Copilot
- Users are solely responsible if they publish or share anything Copilot produces
- Copilot's responses may infringe on someone else's copyright, trademarks, or privacy rights — and Microsoft takes no responsibility for that
- If anyone sues Microsoft because of something you did with Copilot, you have to pay Microsoft's legal costs (a clause lawyers call indemnification)
This is not buried in tiny text at the bottom of a page. This is in the official, formal Terms of Use that every user agrees to when they start using Copilot.
The Big Contradiction — Two Very Different Microsoft Messages
Here is why people are so shocked by this. Compare what Microsoft says in its advertising versus what it says in its legal documents.
What Microsoft Says in Ads
- "Copilot is your intelligent companion for work"
- "Transform the way you work with AI"
- "Copilot helps you write emails, summarise documents, and get things done faster"
- "The future of productivity is here"
Microsoft sells Copilot to businesses at roughly $30 per user, per month. It has 15 million paying subscribers using it for real work tasks: writing reports, analysing data, drafting legal documents, summarising meetings.
What Microsoft Says in Legal Documents
- "For entertainment purposes only"
- "Can make mistakes"
- "May not work as intended"
- "Don't rely on it for important advice"
- "Use at your own risk"
These two messages flatly contradict each other. The advertising says "use this for serious work." The legal document says "this is entertainment; don't use it for anything serious."
Why Did Microsoft Write This?
There are two reasons — one simple and one more serious.
Reason 1 — Lawyers Protecting the Company
When something goes wrong with AI — and things do go wrong — the first question everyone asks is "who is responsible?"
By writing "entertainment purposes only" and "use at your own risk" in the legal document, Microsoft is trying to make sure the answer is always "you are responsible, not us."
If Copilot gives you wrong medical advice and you follow it — Microsoft's terms say it is your fault for relying on entertainment software.
If Copilot writes something that infringes on someone's copyright — Microsoft's terms say you are solely responsible for sharing it.
This is classic legal protection. It is the company saying: we know this can go wrong, and when it does, we are not paying for it.
A Microsoft spokesperson tried to soften this by telling journalists the phrase is "legacy language" that is "no longer reflective of how Copilot is used today" and will be updated. But they gave no timeline for when they will change it — and as of today the terms still say exactly this.
Reason 2 — Because AI Really Does Make Mistakes
The second reason is more honest. AI really does make things up. And Copilot has done it publicly, embarrassingly, and seriously.
Here are real examples of Copilot going wrong:
**The German Journalist Disaster (2024).** Copilot falsely identified a German court reporter named Martin Bernklau as a convicted child abuser and fraudster, and even provided his home address. This man had done absolutely nothing wrong. He was a journalist who covered criminal cases, and Copilot confused the cases he reported on with crimes he had committed himself. Microsoft had to block all searches about him after a formal complaint.
**Football Violence False Claims (January 2026).** Copilot generated completely false claims about football-related violence, triggering widespread coverage of its reliability problems.
**Amazon AWS Outages.** At Amazon, there were at least two AWS outages in which engineers let an AI coding tool make changes without sufficient human oversight. The company called these "user error", but they show what happens when people trust AI too much.
These are not minor errors. These are serious mistakes that affected real people. The "entertainment only" language starts to make a lot more sense when you look at this track record.
How Popular Is Microsoft Copilot Really?
Here is something interesting. Despite all the money Microsoft has spent promoting Copilot, the actual usage numbers tell a very different story.
Microsoft has 450 million paid Microsoft 365 seats — that is 450 million people using Word, Excel, and PowerPoint. But only 15 million of them pay for Copilot. That is just 3.3%.
In other words — 97% of Microsoft's existing customers looked at Copilot and said no thanks.
And of the people who did try Copilot and then stopped, 44% said the main reason was that they did not trust its answers.
When a company's own customers stop using a product because they don't trust it — and then the company's own legal documents call it "entertainment only" — that is a very honest picture of where things stand.
Are Other AI Companies Any Different?
Microsoft is not alone in these kinds of disclaimers. Every major AI company buries similar language in its terms.
| Company | What Their Terms Say |
|---|---|
| Microsoft Copilot | "Entertainment purposes only. Use at your own risk" |
| OpenAI ChatGPT | "Don't rely on outputs as sole source of truth or factual information" |
| Google Gemini | "Don't rely on Services for medical, legal, financial advice" |
| xAI Grok | "AI is probabilistic in nature and may produce incorrect output" |
The difference is that none of the others used the phrase "entertainment purposes only." That specific wording is what shocked people. It sounds like Microsoft is describing a video game, not a tool it charges companies money to use for serious business decisions.
What Should You Actually Do With Microsoft Copilot?
Despite all of this, Microsoft Copilot is not useless. It is genuinely helpful for many tasks. The problem is that people sometimes trust it too much.
Here is a simple guide for using Copilot smartly:
Use Copilot for:
- Getting a first draft of an email or document — then edit it yourself
- Brainstorming ideas — then verify and develop them
- Summarising long documents — then check the summary against the original
- Generating options and suggestions — then make the final decision yourself
Never use Copilot for:
- Medical decisions or health advice
- Legal advice or anything you might sign
- Financial decisions with real money
- Any information you will share publicly without checking
- Anything where being wrong could seriously hurt you or others
The golden rule is simple — Copilot is a starting point, never an endpoint. Whatever it produces, check it, verify it, and apply your own judgment before doing anything with it.
The Bigger Picture — Why This Matters
This story is not really about Microsoft Copilot specifically. It is about a pattern across the entire AI industry.
Every major AI company is doing two things simultaneously:
Thing 1 — Aggressively marketing AI as transformative, essential, and revolutionary. Spending billions on advertising. Selling to businesses. Building AI into every product.
Thing 2 — Quietly admitting in legal documents that the AI makes mistakes, cannot be trusted for important decisions, and you use it at your own risk.
This gap between the marketing message and the legal reality is something every person who uses AI tools needs to understand.
AI is a genuinely powerful and useful tool. But it is not magic. It is not always right. It does not understand context the way a human does. It sometimes confidently makes things up — this is called hallucination — and presents them as facts.
Microsoft's "entertainment only" disclaimer is embarrassing for the company. But in a strange way it is also the most honest thing an AI company has said publicly in a long time.
What Happens Next?
Microsoft has said they will update the language in the terms. The "entertainment only" phrase will likely disappear soon and be replaced with something that sounds better but means roughly the same thing.
But the underlying reality will not change just because the words change.
AI assistants like Microsoft Copilot are genuinely useful tools that make mistakes and should not be blindly trusted for important decisions. That was true before this disclaimer went viral. It will be true after Microsoft rewrites the legal language.
The most important thing you can take from this story is not that Copilot is bad. It is that any AI tool — Copilot, ChatGPT, Claude, Gemini — is a tool that helps you think and work faster. Not a replacement for thinking and working carefully.
Use it. Just do not trust it blindly.
That is true whether Microsoft says so in their terms or not.