In Part 1, we established that Handala didn't pick Stryker off a strategic target list and then figure out how to break in. They found access, recognized the value, and used it. That's still a deliberate, damaging attack—it just means the selection process looked a lot more like browsing a marketplace than writing an operation plan.

Part 2 is about what that looks like at scale, because "opportunistic" sounds almost harmless until you see the blast radius.

MOVEit is the clearest example of what happens when opportunism gets handed a big enough lever. The Cl0p group found a SQL injection vulnerability (CVE-2023-34362) in Progress Software's MOVEit Transfer and ran it against everyone running a vulnerable, internet-exposed instance. Government agencies, financial institutions, healthcare providers, universities, energy companies. Not because each one made someone's list—each one was just sitting there when the exploit ran. The victim list wasn't curated at all; it was generated by an exploitable vulnerability. Any organization that fit the technical profile got included, and "we're not strategically relevant to anyone's geopolitical agenda" was not a get-out-of-jail-free card.

It was not surgical. It was the digital equivalent of driving the monster truck Grave Digger through Times Square on New Year's Eve while waving to Ryan Seacrest and screaming “RYAN! RYAN!! LOOK AT ME!”

Supply chain compromise works the same way with more patience. You don't have to breach every organization in a chain; you compromise something upstream and inherit trust from there. Stryker sits inside a global medical supply chain: vendors, distributors, hospital networks, device maintenance operations, dozens of countries. Access there has value that extends well beyond Stryker itself. Whether they were the deliberate target or just the most exposed part of something bigger almost doesn't matter. The access existed. Someone used it.

Legacy systems are their own category of gift to attackers, and I don't mean that in a nice way. They run on known configurations with known weaknesses, and they tend to get monitored less because everyone's too nervous to touch them in the first place. The "if it ain't broke don't fix it" crowd is essentially running a loyalty program for threat actors—known entry points, known movement paths, and nobody looking too closely. It's not a gamble for an attacker. It's a reliable plan.

None of this means threat actors are opportunistic amateurs swinging at whatever's in front of them. Salt Typhoon sitting silently inside US telecom infrastructure for years required real capability, real resources, and real patience. Nation-state groups absolutely develop custom tooling, invest in zero-days, and run long-horizon operations when the mission requires it. Those are just the exception, not the default. And even those operations needed a foothold somewhere. Nobody burns a $40,000 zero-day on a target they could have accessed for $1,300 on a forum. You save the expensive stuff for when you actually need it and fill everything else in with what's already on the shelf.

The "surgical nation-state actor" framing does real damage to how people understand their own exposure. Large organizations assume they'd know if they were being specifically targeted. Small ones assume they're too boring to show up on anyone's radar. Both are wrong for the same reason: the IAB (Initial Access Broker) market doesn't care about your strategic importance. It cares about whether your VPN credentials are floating around somewhere.

"We're too small to matter" sounds reasonable until you look at how access actually gets priced and sold. IABs sort by accessibility and price by revenue. A regional accounting firm with a misconfigured VPN is on the menu—not because an Iranian intelligence analyst put them there, but because the credential was available and someone needed it. The attack that follows doesn't consult your threat model. It just needs something reachable that pays.

So: patch things, including the old stuff nobody wants to schedule a maintenance window for. Reused credentials need to go. MFA belongs on every remote access point, not just the ones that feel important on a Tuesday. Shrink your internet-facing attack surface. Segment your networks so a foothold doesn't turn into a full house. And actually check what your vendors' security posture looks like, because you can do everything right and still inherit a compromise through someone upstream who didn't—and you won't know until it's already someone else's problem that became yours.
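
That segmentation point is easy to sanity-check on paper. The sketch below (zone names and allow-lists are entirely hypothetical, not any real topology) does a breadth-first walk over permitted zone-to-zone flows to show what a foothold in the VPN landing zone can transitively reach in a flat network versus a segmented one:

```python
from collections import deque

def reachable_zones(allow: dict[str, set[str]], start: str) -> set[str]:
    """BFS over allowed zone-to-zone flows: everything a foothold in
    `start` can eventually reach, directly or through other zones."""
    seen = {start}
    queue = deque([start])
    while queue:
        zone = queue.popleft()
        for nxt in allow.get(zone, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical flat network: every zone can talk to every other zone.
flat = {
    "vpn":     {"corp", "servers", "crown-jewels"},
    "corp":    {"vpn", "servers", "crown-jewels"},
    "servers": {"corp", "crown-jewels"},
}

# Same zones, segmented: the VPN only lands in corp, corp has no
# lateral path, and only a jump host may touch servers.
segmented = {
    "vpn":     {"corp"},
    "corp":    set(),
    "jump":    {"servers"},
    "servers": {"crown-jewels"},
}

print(sorted(reachable_zones(flat, "vpn")))       # every zone, crown jewels included
print(sorted(reachable_zones(segmented, "vpn")))  # ['corp', 'vpn']
```

Same foothold, same zones; the only difference is the allow-list, and in the segmented case the compromise stops two hops short of anything valuable. That is the whole argument for segmentation in four lines of policy.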

None of that makes a great story. There's no version where you're the named target of a perfectly executed nation-state operation. For most organizations, that story was always fiction.

The ones that get hit aren't usually the chosen ones. They're the ones that were reachable when someone went looking, or reachable when someone who already had the access decided the timing was right.

Stryker got hit because the access existed and Handala knew exactly when to use it. The lesson isn’t that you need to disappear from the threat landscape. It’s that you need to make your environment harder to access, harder to move through, and harder to turn into something valuable.

That’s what all of this actually comes down to. Patching exposed systems. Eliminating reused credentials. Enforcing MFA everywhere it matters. Segmenting networks so access doesn’t become control. Understanding your external exposure. Holding vendors to a standard that doesn’t become your problem later.

None of those are abstract best practices. They’re the difference between access that sits unused and access that turns into an incident. Because in a market where access is bought and sold every day, attackers aren’t building target lists. They’re buying what works.

Part 1 asked what your environment would be worth to someone who already had access. This is the other side of that equation. The goal isn’t to be invisible.

It’s to make sure your access never becomes worth the price.