Quick answer

The biggest risk with AI at work is not the technology itself but careless use, and pasting confidential client data into a public AI tool is the most common mistake. Know your company's AI policy, never share data that should stay internal, and always disclose AI use when your company or profession requires it.

Most large companies now have an AI policy. Many professionals have contractual or regulatory obligations around data confidentiality. And yet, every day, employees paste sensitive documents into public AI tools, often without realising the risk. Here is how to use AI productively and safely.

The most important rule: know what data you are sharing

When you use the free tier of ChatGPT, Claude, or most other public AI tools, your conversations may be used to train future models (depending on your settings). That means anything you paste in, whether client names, financial figures, internal strategies, or personal data, could end up outside your control.

  • Do NOT paste: client data, patient records, financial information, unreleased product plans, personal employee data, legal documents under NDA
  • Generally safe: publicly available information, your own writing for editing, general questions, anonymised data, code that does not contain secrets or proprietary algorithms
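The do/don't lists above can even be turned into a rough automated screen. As a minimal sketch, the patterns and keywords below are illustrative assumptions, not a complete definition of "confidential", and such a check supplements rather than replaces your own judgment:

```python
import re

# Illustrative patterns only: these regexes and the keyword list are
# assumptions for this sketch, not an exhaustive policy.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b\+?\d[\d ()-]{7,}\d\b"),
    "possible card/account number": re.compile(r"\b\d{12,19}\b"),
}
SENSITIVE_KEYWORDS = ["confidential", "nda", "internal only", "patient"]

def flag_sensitive(text: str) -> list[str]:
    """Return a list of reasons this text looks unsafe to paste."""
    reasons = [name for name, pat in PATTERNS.items() if pat.search(text)]
    lowered = text.lower()
    reasons += [f"keyword: {kw}" for kw in SENSITIVE_KEYWORDS if kw in lowered]
    return reasons

print(flag_sensitive("Invoice for jane@acme.com, marked CONFIDENTIAL"))
```

A clean result from a check like this does not make text safe; it only catches the obvious slips before they leave your machine.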

Check your company's AI policy

Many companies have an official AI policy — or are in the process of creating one. Find out if yours does. Common policies include: a list of approved tools, rules about what data can be shared, requirements to disclose AI-generated content, and requirements for human review before AI outputs are used.

If your company does not have a policy yet, operate conservatively: treat AI tools like external consultants — share only what you would share with someone outside the company.

Safe AI tools for work (with data protection)

  • Microsoft Copilot (M365) — Runs inside your company's Microsoft 365 tenant. Data stays within your organisation's environment.
  • ChatGPT Enterprise / Team — OpenAI's business tiers do not use your data for training and include admin controls.
  • Claude for Enterprise (Anthropic) — Similar protections, with strong privacy commitments.
  • Gemini for Google Workspace — Built into the Google Workspace apps, with enterprise data protections.

Disclosure: when you must say you used AI

Some professions require disclosure. Lawyers in many jurisdictions must disclose AI use in filings. Journalists at major publications have AI disclosure policies. Academic work has plagiarism and AI policies. Know your professional standards — ignorance is not a defence.

Practical habits for safe AI use at work

  • Always review AI output before using it — never send AI-generated content unread
  • Anonymise data before pasting it into public AI tools (replace names with "Client A", "Employee 1")
  • Use enterprise versions of AI tools when available — they have better data protection
  • Keep records of where you used AI if your work is subject to audit
  • Do not let AI make final decisions on consequential matters — keep humans accountable
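The anonymisation habit above can be sketched in a few lines. This is a minimal illustration, assuming you maintain your own mapping of real names to placeholders; the names here are hypothetical:

```python
import re

def anonymise(text: str, mapping: dict[str, str]) -> str:
    """Replace each known name with its placeholder, longest names first,
    so that e.g. a full company name is replaced before a shorter prefix."""
    for name in sorted(mapping, key=len, reverse=True):
        text = re.sub(re.escape(name), mapping[name], text, flags=re.IGNORECASE)
    return text

# Hypothetical names and placeholders for the example.
mapping = {"Acme Corp": "Client A", "Jane Doe": "Employee 1"}
print(anonymise("Jane Doe flagged the Acme Corp contract.", mapping))
# prints: Employee 1 flagged the Client A contract.
```

Keep the mapping on your side so you can restore the real names in the AI's answer afterwards; the tool never needs to see them.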

Quick check: Before pasting anything into an AI tool, ask yourself: "Would I be comfortable if this text appeared in a news article tomorrow?" If not — anonymise it or do not share it.

Bottom line

AI is a powerful work tool and using it well puts you ahead. But carelessness with data can create serious legal, reputational, and professional risks. The habit to build is simple: before sharing anything with an AI, pause for two seconds and ask whether it contains information that should stay private.