If you think AI tools like ChatGPT are harmless for your clients, think again. 

Last week, Sam Altman—the CEO of OpenAI—publicly warned that conversations with ChatGPT are not covered under legal privilege. People using ChatGPT as a “therapist” or “confidant” are exposing their most private thoughts, company details, and even sensitive security issues to a platform that records and stores every word. 

For MSPs, this should set off every alarm bell you have. 

This isn’t a technology story. It’s a legal liability story. And if you aren’t warning your clients about it now, you’re leaving yourself wide open to the next lawsuit, the one that comes when their AI “secret” becomes public. 

AI Is the New Confessional, But with No Privacy 

Your clients’ employees are already using ChatGPT and other generative AI tools to: 

  • Draft sensitive emails 
  • Outline strategic business plans 
  • Ask “innocent” cybersecurity questions 
  • Even dump personal frustrations about leadership and finances 

And here’s the bombshell: None of that is privileged. None of it is protected. 

Once it’s typed into AI, it’s in a dataset forever. That information can be used to train models, leak in future outputs, and—most importantly—be subpoenaed. 

Think about it: a breach happens, a lawsuit follows, and the discovery process turns up transcripts of everything your client’s CFO has been dumping into ChatGPT about “backups that aren’t really working” or “unpatched systems we can’t afford to fix right now.” 

Congratulations. You just handed a lawyer all the evidence they need to bury your client—and drag your MSP right into the fire. 

Evidence Cuts Both Ways 

In my book Level Up, I tell MSPs over and over: evidence is either your best defense or your worst enemy. 

If you don’t control the evidence, you’re in trouble. And AI is creating a massive, uncontrolled evidence trail that lives outside your security stack. 

Your client thinks ChatGPT is just a “smart assistant.” In reality, it’s a public drop box for discovery lawyers. 

Let me be blunt: the first time one of your clients leaks a security vulnerability or an internal failure into AI, that transcript becomes the smoking gun in a negligence case. 

And who’s going to take the heat? The MSP. Because in 2025, clients don’t just sue for the breach. They sue for not warning them about the risk. 

The CFO Blind Spot: You’re About to Get Blindsided 

Here’s why this matters to your business today. 

CFOs—your clients’ ultimate risk managers—are completely unaware of the financial damage AI misuse can cause. According to vCSO research, CFOs rarely partner with IT on cyber risk management, even though cyberattacks can wipe out company value, lead to lawsuits, and void insurance coverage. 

Now add AI to that mix. Every careless keystroke into an AI platform is a future liability bomb that no cyber policy will cover. 

And make no mistake: insurance carriers are already looking for ways to deny claims when AI is involved. 

Compliance and Legal Fallout 

This is the part MSPs keep missing: 

  • Data entered into AI is discoverable in court. 
  • Cyber insurers will use AI activity as evidence of negligence. 
  • Regulators will use AI activity as evidence of non-compliance. 

HIPAA, PCI, and SEC violations? All fair game. 

If you don’t have documented evidence that you warned clients about AI risks, then you own the liability. 

As I wrote in Standardized: The MSP’s Guide to Avoiding Lawsuits, your contracts don’t protect you. Only documentation does. 

How MSPs Must Respond Right Now 

This is no longer a “wait and see” moment. You must build an AI risk mitigation strategy into your client services today. 

Here’s the three-step action plan: 

  1. Educate and Warn: In Writing

Do not assume your clients know better. They don’t. 

Create a formal AI Use Policy for every client. It must clearly state: 

  • What data is prohibited from being shared with AI tools 
  • How AI-generated content must be vetted before use 
  • That no generative AI system is considered private or secure 

Deliver this in a Quarterly Security Briefing and make them sign a Risk Acceptance Document if they refuse to adopt it. 

If they sign off, you’ve transferred the risk. If they don’t, you own it. 

  2. Track AI-Related Incidents

Your stack must evolve. Start monitoring for AI platform usage on client networks. 

Why? Because the day a breach leads to an investigation, you need evidence that you: 

  • Knew AI was in use 
  • Warned the client 
  • Took reasonable steps to mitigate the risk 
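
Monitoring doesn’t have to wait for a new tool purchase. As one minimal sketch, here’s a Python script that scans a DNS query log for traffic to well-known generative-AI platforms. The CSV log format (“timestamp,client_ip,query_domain”) and the domain watchlist are assumptions for illustration; adapt both to whatever your DNS filter or firewall actually exports.

    #!/usr/bin/env python3
    """Sketch: flag DNS queries to generative-AI platforms in a client's log.

    Assumed log format (adapt to your stack): CSV lines of
    timestamp,client_ip,query_domain -- one query per line.
    """
    import csv
    import sys
    from datetime import datetime, timezone

    # Illustrative watchlist, not exhaustive; extend it per client.
    AI_DOMAINS = {
        "chatgpt.com",
        "chat.openai.com",
        "api.openai.com",
        "gemini.google.com",
        "claude.ai",
        "copilot.microsoft.com",
    }

    def is_ai_domain(domain: str) -> bool:
        """True if the query matches, or is a subdomain of, a watched domain."""
        domain = domain.lower().rstrip(".")
        return any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS)

    def scan(log_path: str) -> None:
        """Print one evidence line per AI-platform query found in the log."""
        with open(log_path, newline="") as f:
            for row in csv.reader(f):
                if len(row) != 3:
                    continue  # skip malformed lines instead of crashing mid-scan
                timestamp, client_ip, domain = row
                if is_ai_domain(domain):
                    print(f"{timestamp}  {client_ip}  ->  {domain}")

    if __name__ == "__main__":
        # Stamp each run so the report can serve as dated evidence later.
        print(f"AI usage scan run at {datetime.now(timezone.utc).isoformat()}")
        scan(sys.argv[1] if len(sys.argv) > 1 else "dns_queries.csv")

Run something like this on a schedule, archive the output with your client documentation, and you’ve started building exactly the evidence trail described above: proof that you knew AI was in use and when you knew it.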

  3. Launch Cyber Liability Essentials: Fast and Simple

If you haven’t rolled out a Cyber Liability Essentials program, you’re already behind. 

This is not a heavyweight compliance overhaul. It’s a lightweight, structured starting point designed to do one thing: get your clients on board quickly. 

With Cyber Liability Essentials, you: 

  • Rapidly implement basic policies (AI usage, incident response, password hygiene) 
  • Collect signatures that transfer risk back to the client 
  • Build a paper trail that protects your MSP when—not if—something goes wrong 

Once your clients experience how simple and effective Essentials is, it becomes easy to move them into a full compliance program over time. 

In short: Cyber Liability Essentials is your foot in the door to liability protection. It makes it easy for clients to take the first step, and it gives you the documentation you need to defend your MSP. 

Final Warning: AI Will Be Used Against You 

Here’s the bottom line: 

  • AI is recording every bad decision your clients make. 
  • Lawyers will use those transcripts to prove negligence. 
  • If you don’t create policies now, your MSP will be next on the list of defendants. 

Your role as an MSP isn’t just to manage IT. It’s to manage liability. And right now, AI is the single largest, least-controlled liability entering your clients’ networks. 

If you want to stay out of the courtroom, you must move first.