The standard playbook for this kind of alert is straightforward. We log into the Microsoft Defender portal (security.microsoft.com), pull up the device, and review the alert. Severity, file path, process, recommended action: it's all meant to be there in one place. This time it wasn't. With every filter cleared, the device page in the security console showed no recent quarantine activity at all. The device itself looked fine; software inventory current, last seen within the hour, all health indicators green. The screenshots from the user clearly showed Defender had quarantined something; the cloud just wasn't admitting it.
That's the kind of mismatch that tends to make us pause. A device reporting healthy telemetry but with a real, user-visible incident that doesn't appear in the portal is a small mystery, and small mysteries in security work are worth chasing.
Good investigation starts with the boring possibilities. We worked through them:
Was it a partner-side licensing or access issue? We always operate via Microsoft's Granular Delegated Admin Privileges (GDAP), with dedicated customer-tenant accounts kept in reserve for cases where GDAP scoping might be obscuring something. There was a vague memory of certain Defender features needing partner-side licensing, but we ruled that out quickly: the customer's licences cover Defender data, and signing in directly with the dedicated customer-tenant administrator account produced the same empty result.
Was the device improperly onboarded to Defender for Endpoint? Also ruled out; the device showed up in Assets, was reporting telemetry, had recent timestamps, and had handled a previous quarantine event correctly only weeks earlier. The cloud connection was working.
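That onboarding check can be reproduced from the device side with a couple of built-in commands. A minimal sketch (the `Sense` service is the Defender for Endpoint EDR sensor; exact `Get-MpComputerStatus` property names can vary slightly across Windows versions):

```powershell
# Is the Defender for Endpoint sensor service present and running?
Get-Service -Name Sense | Select-Object Status, StartType

# Local engine health: service state, real-time protection, signature age.
Get-MpComputerStatus |
    Select-Object AMServiceEnabled, RealTimeProtectionEnabled,
                  AntivirusSignatureLastUpdated
```

If the Sense service is running and signatures are current, the cloud connection is almost certainly not the problem.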
So far so good; we'd narrowed the problem to "this specific incident, on a fully working setup, didn't surface in the portal."
Microsoft Defender on a Windows device is two related things at once: a local antivirus engine that runs on the machine itself, and a cloud telemetry pipeline that feeds the Defender portal hosted by Microsoft. They are connected, but they are not the same. The local engine sees and acts on a great deal of activity in real time; the cloud only ever sees what the local engine (and the EDR sensor sitting alongside it) decides is worth forwarding as an alert or incident.
That distinction is rarely visible until something falls into the gap. In our experience, certain classes of detection (particularly heuristic matches on non-file resources, lower-severity actions, and routine signature-based blocks of well-understood items) can be handled silently on the device and never produce a portal-side incident. Other practitioners in the small-business space have reported the same pattern with Controlled Folder Access blocks, where the device's own Defender history shows clear events that don't appear in the security portal even with all filters cleared (the Microsoft Q&A community has an instructive thread on exactly that scenario which is worth a read if you've ever wondered why).
The user saw it, the device knew about it, the cloud was completely unaware.
That's not a bug; it's how the product works. But it's not what most people would assume from looking at the portal.
There's also a second, related gap; this one tied directly to licensing. Microsoft 365 Business Premium ships with Defender for Business, which is a different (and more limited) product than Defender for Endpoint Plan 2, even though both are accessed through the same portal. Defender for Business gives you incidents, alerts, the device timeline, software inventory, and the Action Center. What it doesn't give you is the KQL-based Advanced Hunting interface that lets you query the raw EDR telemetry directly. Microsoft documents this plainly:
"Advanced hunting capabilities aren't included in Microsoft Defender for Business."
— Microsoft Learn, Advanced Hunting with PowerShell API Guide
The fuller comparison between Defender for Business and the Plan 2 product is set out in the Defender for Business FAQ on Microsoft Learn.
This is a meaningful product distinction that's easy to miss because the two products share a portal, and it shaped the rest of the investigation. With Plan 2 we'd have written a quick KQL query against the DeviceEvents table to expose the raw quarantine events directly, regardless of whether they'd produced an alert. With Defender for Business that option simply isn't on the menu, and there's no programmatic way to fill the gap from the cloud side without an upgrade to a tier most small businesses don't need.
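For reference, had Advanced Hunting been available, the query would have been short. A sketch of the kind of thing we'd have run against the DeviceEvents table (the exact ActionType values worth filtering on are an assumption and would need checking against the schema reference):

```kql
DeviceEvents
| where Timestamp > ago(7d)
| where ActionType in ("AntivirusDetection", "QuarantineFile")
| project Timestamp, DeviceName, ActionType, FileName, AdditionalFields
| order by Timestamp desc
```

On Defender for Business, the query editor that would accept this simply doesn't exist in the portal.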
For most small businesses on Business Premium, that combination (some events not reaching the cloud in the first place, and no Advanced Hunting to query what is there) is a gap worth knowing about. The cloud portal will show you a lot. It won't show you everything.
With cloud-side investigation exhausted, we moved to the device via remote PowerShell through our RMM. No screen takeover, no interruption to the user, no "can you reboot for me" phone call.
The local Defender PowerShell module is available on every Windows endpoint and provides direct access to the quarantine and detection history regardless of what the cloud knows. Within a couple of minutes we had a clear picture:
- Resources of type `rootcert:` (registry-stored root certificates) rather than files on disk

That last point was the really unusual one. Defender wasn't quarantining a downloaded executable or a suspicious script; it was quarantining two specific certificate thumbprints from the Windows trust store.
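The commands behind that picture are part of the built-in Defender PowerShell module. A sketch of the kind of session we ran (output trimmed to the fields that mattered):

```powershell
# Recent detections as recorded by the local engine,
# independent of what reached the cloud.
Get-MpThreatDetection |
    Sort-Object InitialDetectionTime -Descending |
    Select-Object ThreatID, InitialDetectionTime, Resources

# Threat catalogue entries for those detections: names, severity, status.
Get-MpThreat | Select-Object ThreatName, SeverityID, IsActive
```

The `Resources` property is where the `rootcert:` entries showed up; on a typical file-based detection you'd see `file:` paths there instead.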
A quick search on the threat name and thumbprints surfaced the explanation immediately. Around 30 April 2026, Microsoft pushed a Defender signature update containing a new detection rule that incorrectly matched two legitimate DigiCert root certificates installed in the Windows trust store on machines worldwide.
The detection led Defender to quarantine the registry entries for those certificates, effectively removing them from the Windows trust store on affected systems. The signature was over-broad; written in response to a real DigiCert incident, but matching the legitimate root authorities rather than the specific compromised certificates Microsoft was trying to catch. A corrected signature was pushed shortly afterwards and distributed via the standard Defender update mechanism (BleepingComputer covered the incident in detail).
Now, this matters more than it might first appear; the DigiCert root certificates that were quarantined underpin a significant portion of internet TLS and code-signing trust. Removing them from the Windows trust store has consequences that don't always show up immediately.
Many users wouldn't notice straight away; Windows often has fallback trust paths, and many sites use other certificate authorities. But the longer those certificates are missing, the more likely something visibly breaks.
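If you want to check whether a given root certificate is still present, the machine trust store can be queried directly. A sketch (the thumbprint below is a placeholder, not one of the affected certificates):

```powershell
# Placeholder thumbprint -- substitute the one you're actually checking.
$thumbprint = 'A1B2C3D4E5F60718293A4B5C6D7E8F9012345678'

$cert = Get-ChildItem -Path Cert:\LocalMachine\Root |
    Where-Object { $_.Thumbprint -eq $thumbprint }

if ($cert) {
    "Present: $($cert.Subject)"
} else {
    "Missing from the machine root store."
}
```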
It's also worth noting why this detection in particular went silent on the cloud side. The Cerdigent rule was a heuristic match on a registry-stored certificate; there was no file hash, no process, no user-initiated activity for the alerting pipeline to attach itself to. That made it the kind of edge case which, combined with the Defender for Business limitations described earlier, was never going to produce a portal incident, even though the local engine clearly took action.
Once we knew what we were dealing with, the fix was straightforward. We confirmed the patched signatures were already installed (the auto-update mechanism had done its job), restored the quarantined certificates from the local Defender quarantine, verified them back in the trust store, and ran a fresh scan to confirm a clean state. End-to-end, around 30 minutes; the user kept working throughout.
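For reference, the restore step lives outside the PowerShell module: quarantined items are listed and restored with MpCmdRun.exe. A sketch of the remediation sequence (the threat name below is a placeholder; test this flow on a non-production device first):

```powershell
# 1. Make sure the corrected signatures are in place.
Update-MpSignature
(Get-MpComputerStatus).AntivirusSignatureVersion

# 2. List quarantined items, then restore the affected one by name.
$mpCmdRun = "$env:ProgramFiles\Windows Defender\MpCmdRun.exe"
& $mpCmdRun -Restore -ListAll
& $mpCmdRun -Restore -Name 'ThreatNameFromTheListAbove'   # placeholder

# 3. Re-scan to confirm a clean state.
Start-MpScan -ScanType QuickScan
```

Restoring before the corrected signature is installed risks the items being quarantined again on the next scan, which is why the signature check comes first.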
There are three takeaways here that we think are worth holding onto, whether you're an Aztek Native client or not.
The cloud portal isn't the whole story. If you're on Microsoft 365 Business Premium (which is a great licence for small businesses), the security console you see at security.microsoft.com is genuinely useful, but it isn't comprehensive. Some classes of detection can be handled on the device without ever producing a cloud incident, and the Advanced Hunting tools that would let you go behind the portal aren't available at this licence tier. Anyone troubleshooting purely from the portal is potentially missing real events.
Global signature false positives happen. Microsoft is pushing detection updates constantly, and every once in a while one of them is too broad. This isn't the first time and it won't be the last. The remedy is recognising the pattern quickly; if multiple devices light up with the same unusual detection at roughly the same time, a vendor-side issue should be high on the list of suspects, not low.
Investigation tools matter. Local PowerShell against the device, an RMM that can run scripts non-disruptively, and the experience to know which path to take when the obvious one stops working; that's the toolkit that turns a worried-email-on-a-Saturday into a wrapped-up incident before lunch on Monday. For us this is everyday work; for a small business owner trying to interpret a Defender alert on their own laptop, it's the difference between confidence and a long Sunday.
| For Microsoft 365 Business Premium customers: the Defender portal is a useful tool, but it isn't the whole picture. Some local Defender events don't surface there at all, and the deeper investigation interface (Advanced Hunting) sits behind a higher licence tier. If a security alert ever has you wondering whether anyone's actually watching, that's the gap worth knowing about. |
If you've ever stared at a security alert and wondered whether to take it seriously or whether your IT setup would catch what the alert didn't say, that's the conversation we're here for. Let's have a chat; no obligation, just a friendly call.