AI is the latest shiny object in tech, and your team is probably already using it. But here's the uncomfortable truth: too many people are treating AI like it’s their smartest engineer—when in reality, it’s the intern who just showed up with your morning coffee. 

That’s not an insult. It’s reality. 

Today’s generative AI tools are powerful, but they don’t know your environment, your clients, or your context. They don’t understand regulatory nuance, operational risk, or legal liability. They don’t ask follow-up questions. They just take what you give them and run with it. 

Which brings us to the real risk: Would you hand your intern a list of Social Security numbers and say, "Sort these out for me"? Would you trust that intern to decide what information is confidential or mission-critical? Of course not. So why are we doing it with AI?

The Intern Analogy Your Team And Clients Will Understand 

Here’s how to explain it internally and externally: AI is like an intern. It can be helpful, but it lacks judgment, context, and history. It's a fast processor, not a decision-maker. You wouldn’t tell a new intern about the office gossip, the high-profile M&A deal about to close, or the personal details of your CEO. The same guardrails must apply to AI. 

Before your engineers or your clients start dropping sensitive data into AI prompts, ask: Would I be comfortable telling this to an intern who’s only been here a week? 

That framing changes the conversation. It takes AI out of the realm of “magic box that knows everything” and puts it back where it belongs—as a tool that must be supervised, guided, and restricted. 

AI Isn’t Secure by Default, and Your Clients Don’t Know That

One of the biggest misconceptions about AI is that it's inherently private or secure. It's not. Data entered into some AI tools may be used for training. That means if your tech drops in client configurations, passwords, or internal procedures, that information could end up stored or used by third parties without anyone even realizing it.

If you think you’re immune because you’re using “enterprise-grade” AI tools, remember this: security isn’t about tools, it’s about behavior. It doesn’t matter how good the AI platform is if your team or your clients are feeding it toxic data. 

As an MSP, you have to lead that conversation. 

What You Should Be Teaching Your Team Right Now 

Train your engineers to treat AI like an intern: 

  • Don’t share anything you wouldn’t want public. 
  • Don’t assume the AI understands your business. 
  • Don’t outsource final decisions to a tool that can’t be held accountable. 

Give it specific tasks. Provide it with context. Check its work. And document what you’re using it for. 
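
If you want to make that supervision concrete, one option is to route prompts through a small internal helper before anything reaches an AI tool. The sketch below is illustrative only: the regex patterns, function names, and log format are assumptions rather than a complete DLP solution, but it shows the idea of redacting obvious sensitive values and keeping an audit trail of what AI was used for.

```python
import re
from datetime import datetime, timezone

# Illustrative patterns only -- a real deployment would lean on a proper DLP/redaction tool.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "password_field": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact obvious sensitive values and report which pattern types were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

def log_ai_usage(task: str, findings: list[str], logfile: str = "ai_usage.log") -> None:
    """Keep a simple audit trail of what AI was used for and what got redacted."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(f"{stamp}\ttask={task}\tredacted={','.join(findings) or 'none'}\n")

if __name__ == "__main__":
    raw = "Summarize this ticket. Client SSN 123-45-6789, password: Hunter2!"
    clean, found = sanitize_prompt(raw)
    log_ai_usage(task="ticket summary", findings=found)
    print(clean)  # only the redacted text goes on to the AI tool
```

Even a rough guardrail like this forces the "would I tell an intern this?" question to get answered before the prompt leaves your network, and the log gives you the documentation trail discussed later in this piece.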

Then, teach your clients the same. They need to hear it from you before they accidentally feed their contract pipeline into a chatbot that wasn’t designed to protect sensitive business data. 

Don’t Just Talk Cyber Hygiene. Test It 

This intern analogy is also the perfect segue into a larger conversation about cyber hygiene. Because here’s the real danger: your network is already full of data that doesn’t belong where it is. Old spreadsheets. Password dumps. Access credentials saved to desktops.

AI isn’t the only risk—it’s just the newest one. If your environment isn’t clean, anything that interacts with it becomes a liability. 

The first step? Run a Level 1 Pen Test. Not a sales tool. A real-world look at what’s actually sitting on your network right now. Because before you can educate your team on AI boundaries, you need to understand what data is at risk if those boundaries are crossed. 

Too many MSPs think they’re covered because their stack is solid. Firewalls are configured, MFA is enforced, backups are verified. But if your engineers are leaving credentials on their machines, and your clients are saving personal data to shared folders, the stack doesn’t matter. The risk is already sitting inside the perimeter. 
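
Before the pen test report even lands, a rough first pass can show how much of that data is hiding in plain sight. The sketch below is a minimal, assumption-laden example: the share path, file extensions, and patterns are placeholders, and it is no substitute for a real Level 1 Pen Test, but it illustrates sweeping a shared folder for files that look like they contain credentials or SSNs.

```python
import re
from pathlib import Path

# Rough indicators of data that shouldn't be sitting in shared folders -- illustrative only.
RISK_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
}
TEXT_SUFFIXES = {".txt", ".csv", ".log", ".ini", ".cfg", ".md"}

def sweep(root: str) -> list[tuple[str, str]]:
    """Walk a share and flag files containing anything that matches a risk pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in TEXT_SUFFIXES:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits

if __name__ == "__main__":
    # Hypothetical share path -- point this at the folders you actually want swept.
    for file_path, label in sweep(r"\\fileserver\shared"):
        print(f"{label:12} {file_path}")
```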

Cyber Liability Starts with You 

Your clients trust you to keep them safe. But when AI enters the conversation, you need to reset expectations fast. Because if a client uploads proprietary data into ChatGPT, and that data is later leaked, you better believe their lawyers will ask what you did to prevent it. 

As Bruce McCully writes in Standardized, documentation is your best defense. If you can’t prove that you educated your clients, that you recommended clear AI usage policies, and that you gave them tools to protect their own data, you’ll be the one holding the bag. 

This isn’t theoretical. AI usage is already being pulled into lawsuits and insurance claims. And the burden is shifting to service providers to prove they took reasonable steps to protect their clients. 

That means you need a written policy. You need client education. And you need proof that both happened—before the incident, not after. 

AI Isn’t Optional—But Neither Is Discipline 

You can’t stop your team from using AI. You shouldn’t. The productivity gains are real. But what you can do is define the rules of engagement. 

If you don’t, someone else will. 

Make AI usage part of your onboarding. Build it into your Quarterly Security Briefings. Add AI policies to your Cyber Liability Guard program. Use it as a conversation starter to help clients understand that security isn’t just about firewalls; it’s about behavior, decisions, and boundaries.

And remember: if AI is the intern, then you’re the supervisor. That means the liability falls on you when something goes wrong. 

Final Thought: Don’t Be the MSP That Gets Burned 

AI can be your ally. It can accelerate documentation, boost productivity, and even help with security analysis. But it’s only as safe as the rules you wrap around it. 

Would you give your intern the password to your PSA? Access to your clients’ RMMs? Then why are your engineers feeding that same data to a tool that can’t tell a firewall from a phishing link?

Set boundaries. Clean up your environment. And run a pen test to find out what’s really exposed. 

Your clients are watching. So are the attackers. And if you don’t define the rules of AI now, the fallout is coming whether you’re ready or not.