I was onsite with one of our MSP partners the other day. 

Yeah, I still do that. I like to see the war zone up close once in a while. 

So I walked into their office, said hi, and asked the receptionist where to get lunch.

Without missing a beat, she tapped her keyboard like she was hacking into NORAD, then looked up and said: “Go to the diner down the street. Best lunch spot right now.”

Great recommendation. But something caught my eye. 

She didn’t Google it. 

She used ChatGPT. 

I asked if I could take a look. 

Let me tell you, what I saw was enough to make a security guy spill his sandwich. 

Hundreds—and I mean hundreds—of conversations with the free public version of ChatGPT. 

Stuff like: 

  • “Can you help write a proposal for Client X?” 
  • “What’s the best price to quote for this firewall stack?” 
  • “Is this agreement good enough for a HIPAA client?” 

Folks… you are feeding the machine. 

And it’s happening in your own office. 

This isn’t one of your clients. This is your receptionist. Your techs. Your account managers. They’re dumping sensitive data into public AI tools like it’s confetti at a compliance party.

And they have no idea what’s at stake. 

Let me be blunt: you are bleeding intellectual property. 

And your clients? Yeah, they’re doing it too. You just haven’t looked yet. 

That night, I sat down and started building something new for you. 

Introducing the AI Exposure/Readiness Report.

It’s designed to help you—and your clients—understand just how bad the AI leak already is. 

We’re talking: 

  • Shadow AI usage no one’s monitoring 
  • Risky prompts being sent to public LLMs 
  • Departments accessing sensitive data in plain text 

You know… the stuff that’s going to come back and bite someone in the backside when a cyber personal injury lawyer gets ahold of it. 

This isn’t about selling some flashy new product. This is about evidence. About visibility. About giving you the ammo to walk into a client meeting and say, “Hey, we’ve got a problem. And here’s the proof.” 

And because I like to solve problems—not just point at them—we also built a new Acceptable Use of AI Wizard inside Cyber Liability Essentials. 

This thing will help you: 

  • Plug the holes 
  • Set real policies 
  • And actually govern AI use before it governs you 

So here’s what to do. 

  1. Get your own AI policy in place. Don’t delay this. Do it today. 
  2. Run the AI Exposure Report on your internal team. Find out what’s already out there. 
  3. Use it to open conversations with every single client and prospect. 

Because I promise you—they are doing this too. They just don’t know it’s a risk. 

Yet. 

Be the hero who points it out before someone else points the blame at you.

Click here. Get the wizard. Run the report. 

Then go back to lunch. But maybe leave ChatGPT out of it.