PatchDay Alert
MAY 5, 2026
Analysis · 6 min read · By PatchDay Alert Editorial Desk

The security work that landed on ops

Cloud shared responsibility, compliance mandates, and insecure defaults have quietly moved security execution onto ops teams that were never staffed for it.

Ops teams are doing security work. Not in the strategic, threat-modeling sense. In the “someone has to patch 164 CVEs from one Microsoft release, configure identity controls across three cloud providers, and document compliance evidence for two audits this quarter” sense. The work landed on them not because anyone decided they were the right team, but because the alternative was nobody.

The pattern underneath the workload

The standard explanation is “shift left.” Security practices move earlier in the pipeline so problems get caught before production. That framing makes sense for development workflows. What actually happened in operations is different: responsibility shifted, but headcount, tooling budgets, and authority didn’t follow.

Three forces converged. Cloud shared responsibility models from AWS, Azure, and GCP explicitly assign OS patching, identity configuration, network controls, and data classification to whoever runs the infrastructure. That’s ops. CISA’s BOD 22-01 sets hard remediation clocks: KEV-listed vulnerabilities must be patched within 14 to 21 days of listing. PCI DSS v4.0.1, mandatory since March 2025, requires critical patches within 30 days and non-critical within 90. And vendors keep shipping products with insecure defaults. CISA’s January 2025 “Secure by Demand” guidance noted that many OT and ICS products still arrive with weak authentication, known vulnerabilities, limited logging, and insecure default settings. Over 300 manufacturers signed CISA’s Secure by Design pledge, a forward-looking commitment that also signals how far the baseline still has to move.
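Those remediation clocks are mechanical enough to encode. A minimal sketch in Python, assuming per-entry window lengths as inputs (CISA assigns KEV due dates per listing, and PCI DSS v4.0.1 distinguishes critical from non-critical patches; the function names here are illustrative):

```python
from datetime import date, timedelta

def remediation_due(listed: date, window_days: int) -> date:
    """Due date for a vulnerability, given its mandated window.

    BOD 22-01 KEV entries typically carry windows on the order of
    14 to 21 days; PCI DSS v4.0.1 uses 30 days for critical patches
    and 90 days for non-critical ones.
    """
    return listed + timedelta(days=window_days)

def is_overdue(listed: date, window_days: int, today: date) -> bool:
    """True once the mandated window has elapsed."""
    return today > remediation_due(listed, window_days)

# A CVE KEV-listed on 2026-04-01 with a 21-day window is due 2026-04-22.
due = remediation_due(date(2026, 4, 1), 21)
```

The point of the sketch is the shape of the obligation: the clock starts on the listing date, not on whenever the ops team has capacity.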

Each of these forces is reasonable in isolation. Together, they created a job that didn’t exist ten years ago: an ops practitioner who is simultaneously a patch manager, compliance evidence collector, cloud security configurator, and vulnerability remediator, operating under binding federal deadlines, with no increase in staff.

Qualys VP Ivan Milenkovic put it directly in February 2026: “If we continue with the ‘shift left’ mentality of piling cognitive load onto developers, we will fail.” He was talking about developers. The same dynamic hit ops harder and with less intention.

What the numbers show

The staffing picture is the clearest signal. Only 11% of CISOs believe security teams are adequately staffed, per IANS Research, and security budget growth hit its lowest rate in five years at 4% in 2025. When staffing flatlines and mandates don’t, the gap gets absorbed by whoever is closest to the infrastructure.

The burnout data confirms where that absorption lands. SOC teams field an average of 4,484 alerts per day; according to Vectra’s 2023 research, 67% of those alerts are ignored. That’s not negligence. It’s triage under volume that exceeds capacity, and the ignored alerts aren’t disappearing. They’re becoming the backlog that ops inherits when something eventually breaks through.

The skills gap compounds both problems. Fortinet’s 2024 survey found 58% of IT and security leaders cite “staff lack necessary skills and training” as a top cause of breaches. Ops teams aren’t just absorbing more work. They’re absorbing work they weren’t trained for. The same Fortinet report found that nearly 90% of organizations experienced a breach they partially attribute to a lack of cyber skills.

Then there’s the compliance load. Vanta’s 2025 data shows compliance professionals now average 9.5 hours per week on compliance tasks, up from 8.1 in 2023. That’s 11 full working weeks per year spent on evidence collection and audit preparation. When the response to every new mandate is more documentation, the risk is what Security Boulevard described in April 2026 as “compliance theater”: optimizing for passing the audit rather than reducing actual risk.

Where the model breaks visibly

The cloud shared responsibility failures are the clearest signal that the current model doesn’t work as designed.

Toyota’s May 2023 cloud misconfiguration exposed 2.15 million customers’ data for nearly a decade. The cause was attributed to internal process failures, not platform defaults. Microsoft’s Storm-0558 breach in 2023 was severe enough for the Cyber Safety Review Board to call it a “cascade of security failures.” Microsoft then charged a premium license fee to retain the logs organizations needed to detect whether they’d been compromised. The platform vendor created the breach conditions, then monetized the detection.

UniSuper and Google Cloud in 2024 may be the most instructive case. A blank field in a provider-side provisioning tool deleted an entire customer subscription, locking a $125 billion pension fund and its 620,000 members out for two weeks. This was not a customer misconfiguration. It was a provider-side error inside a shared responsibility framework that assigns most operational burden to the customer.

IBM’s 2024 data quantifies the pattern: cloud misconfigurations account for 15% of all breaches, averaging $4.88 million per incident. Google’s own “shared fate” model, introduced as an alternative to shared responsibility, contains an implicit admission. The company described the older model as creating “rigid boundaries that lead to finger-pointing, blame, and abdication.” That’s the platform vendor naming the dysfunction out loud.

The CVE volume compounds everything. Over 40,000 CVEs were published in 2024, a 38% increase over 2023. Microsoft’s April 2026 Patch Tuesday alone dropped 164 CVEs, 131 of them affecting Windows. More CVEs, more compliance mandates, same headcount.
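At that volume, the filter matters more than the raw count. A toy sketch of the kind of cut a triage pipeline makes first, assuming illustrative field names rather than any real feed’s schema:

```python
from dataclasses import dataclass

@dataclass
class Cve:
    cve_id: str
    severity: str   # e.g. "critical", "important", "moderate"
    in_kev: bool    # listed in CISA's Known Exploited Vulnerabilities catalog

def triage(cves: list[Cve]) -> list[Cve]:
    """Surface only entries with a hard deadline or top severity."""
    return [c for c in cves if c.in_kev or c.severity == "critical"]

release = [
    Cve("CVE-2026-0001", "critical", False),
    Cve("CVE-2026-0002", "important", True),   # KEV-listed: clock is running
    Cve("CVE-2026-0003", "moderate", False),
]
urgent = triage(release)  # first two entries survive the filter
```

A filter like this shrinks the list; it does nothing about who has the hours to patch what remains.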

Where it looks different

The organizations that manage this load tend to share a structural trait, not a procedural one. They embed security into the default path rather than bolting it onto a ticket queue.

Netflix’s “Paved Road” model builds security into the platform so teams don’t have to make separate security decisions. Spotify embedded vulnerability scanning directly into CI/CD pipelines, requiring no developer action to adopt. In both cases, the security work didn’t land on a separate team’s backlog. It became a property of the infrastructure itself.

OWASP’s Security Champions model attempts to formalize this at the team level: dedicated training, dedicated time, explicit authority. A 2023 UCL/ACM case study found the model fails when adopted in name only, when organizations appoint champions but don’t give them time or decision-making power. The gap between “we have a program” and “the program works” is whether the champion can actually say no to a release.

NIST’s Cybersecurity Framework 2.0, published February 2024, introduced a three-layer governance structure separating executive accountability from practitioner implementation. The framework doesn’t solve the staffing problem, but it names the structural issue: someone at the executive level has to own the risk, not just delegate the remediation.

The common thread in the data is that security outcomes improve when security decisions are embedded in the infrastructure and backed by organizational authority. They don’t improve when they’re added to an ops team’s existing queue.

What to watch

Two signals will indicate whether this pattern is stabilizing or getting worse. First, watch whether cloud providers start absorbing more of the shared responsibility surface, or whether the boundary stays where it is. Google’s “shared fate” language is a move in that direction, but language changes faster than default configurations. Second, watch the ratio of compliance mandates to compliance staffing. If the 9.5-hour weekly average keeps climbing without corresponding headcount, the gap between audit performance and actual security posture will widen.

Triage tools, including this newsletter, help with the “which CVEs matter” question. The harder question is who has the bandwidth to act on the answer, and whether the people writing the mandates have any interest in finding out.
