Your security vendor's AI isn't making you safer. It's making you tired.
76% of cybersecurity professionals say the AI landscape is overwhelmed by overpromotion. The operational cost of that fatigue is starting to show up in the places that matter.
Every security vendor you deal with has an AI story now. Every one. At RSAC 2025, Google Cloud’s Anton Chuvakin described the vendor floor as “a plate with a lot of garnish — very visually stimulating garnish — but still no meat.” By RSAC 2026, Futurum Group counted 43% of exhibitors leading with “AI” as their primary message, with no proof, no specificity, and nothing a buyer could actually use.
That’s the condition. Not that AI is useless. That the signal-to-noise ratio in security AI marketing has collapsed so badly that the people who actually operate the tools have stopped being able to tell what’s real.
The survey data
Swimlane’s 2025 survey found 76% of cybersecurity professionals say the AI landscape is overwhelmed by overpromotion. ISC2’s workforce study (n=16,029) found 48% are exhausted just trying to stay current on threats and emerging technology. Proofpoint’s Voice of the CISO report (n=1,600 CISOs, 16 countries) found 63% experienced or witnessed burnout in the past year, a record high, with AI governance responsibilities landing on security leaders without any redesign of the role.
The most telling number is a gap. Secure.com’s 2025 survey found only 25% of hands-on security operators strongly agree that AI tools improve their daily work, compared to 56% of CISOs. The people buying the tools and the people using them are having fundamentally different experiences.
Meanwhile, the adoption numbers don’t match the enthusiasm. SANS found exactly half of security organizations actively use GenAI, while 100% say they plan to. Gartner data shows 60% of CISOs are piloting GenAI in security, but only 20% see meaningful results. Deloitte found a 54-point gap between expected and actual AI revenue impact across the enterprise. MIT Technology Review ran a formal “AI hype correction” analysis in December 2025.
Gartner put generative AI in the Trough of Disillusionment in 2025.
The operational cost is not abstract
SOC teams are receiving thousands of alerts per day. When AI tools trained on noisy datasets add their own false positives to the pile, the effect isn’t “more coverage.” It’s more of the same desensitization that alert fatigue has been causing for years, now with a newer label. GovInfoSecurity quoted LBMC security leaders directly: “When AI tools produce a lot of false positives, the human monitoring those alerts experiences AI fatigue and exhaustion, leading to inefficiencies in identifying real vulnerabilities or threats.”
Tool sprawl is the daily version. Gurucul’s 2025 SOC report found 64% of well-resourced teams cite manual investigations and 59% cite tool sprawl as major operational problems. Organizations are running 20-plus alert-generating tools, and 71% use more than 10 for cloud environments alone. Adding an AI layer to each one doesn’t reduce complexity. It multiplies it.
And the displacement effect is real. The Resilient Cyber “2025 AI Security Rewind” documented organizations with critical CVEs unaddressed for months, flat networks with no segmentation, and shared credentials still in production, all while piloting AI security tools. The World Economic Forum put it plainly: “The winners of any post-cyber bubble era won’t be those who buy the most ‘AI’ but rather organizations disciplined to ignore the hype and continue to master the fundamentals.” The Verizon 2025 DBIR backs this up. Nearly 80% of breaches still involve compromised credentials or weak access management. Not exotic AI-powered attacks. Credentials.
Forrester identified the flip side of the fatigue: cynicism is now suppressing adoption of tools that actually work. “Cynical attitudes lead to complacency, leaving organizations unprepared.” The hype didn’t just waste time. It poisoned the well for the tools that might have helped.
The vendors earned this
Gartner coined “agent washing” in 2025 to describe vendors rebranding existing RPA tools and scripted chatbots as agentic AI. They counted roughly 130 genuinely agentic products despite thousands of claims. At Black Hat 2025, post-conference analysis found “virtually every vendor has rebranded as an AI company” and that “AI-powered messaging collapsed under scrutiny” at several booths. Trend Micro rebranded its entire enterprise division as “TrendAI.”
The regulators noticed. The SEC charged the former CEO of Nate, Inc. with claiming AI automated purchases that were actually completed manually by contract workers. The FTC sued Air AI over false conversational AI claims. Securities class actions alleging AI misrepresentation doubled between 2023 and 2024. None of these cases targeted a security vendor specifically, but the dynamic is structurally identical: vendor claims outrunning what the product actually does.
On Reddit and in security community forums, the sentiment is quantifiable. F5’s analysis found “AI slop” accounted for 17% of all expressed concerns among SecOps professionals in the second half of 2025, double the first half. Mentions of “excessive agency” grew 1,300% year-over-year on cybersecurity subreddits.
Gartner predicts 40% of agentic AI projects will be cancelled by end of 2027 due to poor ROI. Forrester predicts enterprises will defer 25% of planned AI spend to 2027. The correction is not coming. It’s here.
What the people who aren’t tired are doing differently
They start narrow. High-volume, low-stakes tasks first: alert triage, false-positive reduction, log correlation. They validate results independently, against pre-AI baselines, without trusting the vendor’s dashboard. They measure MTTR and detection rate improvements, not feature checkboxes. IANS Research documented this pattern in November 2025: the teams that skipped the narrow start and deployed AI broadly are the ones generating the cynicism.
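Validating against a pre-AI baseline is cheap to sketch. A minimal example, assuming a hypothetical alert-export format (open/close timestamps plus an analyst verdict; field names are illustrative, not from any real tool), comparing a baseline window to an AI-assisted pilot window on the two metrics the text names, median time-to-respond and false-positive rate:

```python
from statistics import median

def triage_metrics(alerts):
    """Compute median time-to-respond (seconds) and false-positive rate
    from a list of closed-alert records. Each record is a dict with
    'opened' and 'closed' epoch timestamps and an analyst 'verdict' of
    'true_positive' or 'false_positive' (hypothetical schema)."""
    mttr = median(a["closed"] - a["opened"] for a in alerts)
    fp_rate = sum(a["verdict"] == "false_positive" for a in alerts) / len(alerts)
    return {"median_ttr_s": mttr, "fp_rate": fp_rate}

# Illustrative data: a pre-AI baseline window vs. the AI-assisted pilot.
baseline = [
    {"opened": 0, "closed": 5400, "verdict": "false_positive"},
    {"opened": 0, "closed": 7200, "verdict": "true_positive"},
    {"opened": 0, "closed": 3600, "verdict": "false_positive"},
]
pilot = [
    {"opened": 0, "closed": 1800, "verdict": "true_positive"},
    {"opened": 0, "closed": 2700, "verdict": "false_positive"},
    {"opened": 0, "closed": 900,  "verdict": "true_positive"},
]

before, after = triage_metrics(baseline), triage_metrics(pilot)
print(f"median TTR: {before['median_ttr_s']}s -> {after['median_ttr_s']}s")
print(f"FP rate:    {before['fp_rate']:.0%} -> {after['fp_rate']:.0%}")
```

The point is that the comparison runs on your own exported alert data, not on the vendor’s dashboard, so the numbers can disagree with the pitch.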
They have a red-flag list for vendor pitches. SC Media synthesized practitioner criteria into three lenses: who is actually responsible for outcomes post-contract, whether the vendor gates pre-sales information (a lock-in signal), and whether the vendor can quantify operational impact rather than listing capabilities. The practitioner community has added its own checks: vague “autonomous SOC” claims with no reproducible testing, black-box models with no auditability, pricing that scales with panic rather than proven risk reduction.
JPMorgan Chase’s CISO published an open letter to vendors ahead of RSAC 2025, calling out the lack of reliability, accountability, and transparency. The letter was notable not for saying anything new, but for saying it from a position where vendors had to listen.
And they’re getting regulatory backing. NIST SP 800-53 Release 5.2.0 added fourteen new controls touching supply chain security in August 2025. California’s governor signed an executive order in April 2026 establishing an AI vendor certification framework. These aren’t theoretical. They give procurement teams external authority to push back on unsubstantiated AI claims.
The uncomfortable part
Some of the AI tools work. That’s what makes the fatigue dangerous. Swimlane documented a 7-person SOC team completing 5,000+ cases using AI automation, a milestone they never hit without it. The capability exists. It’s buried under so much marketing noise that the teams who need it most have stopped looking.
The fatigue doesn’t just waste time. It creates a selection problem. The organizations most exhausted by AI hype are the ones least likely to evaluate the next tool fairly, which means the organizations that would benefit most from genuine AI capabilities are the ones most likely to miss them. Forrester called this out directly. The cynicism is earned, but it’s not free.
If your vendor can’t tell you, in specific terms, what their AI does to your mean time to respond, your false-positive rate, or your analyst workload, and back it with data from a deployment that looks like yours, then what they’re selling you is a label. You’ve seen enough labels.
PatchDay Alert uses AI for one thing: answering the five questions you’re actually asking at 9 AM. Is this being exploited. Does it hit what I run. What breaks if I wait. Can it hold until Thursday. What would change that answer. That’s the bar every AI security tool should be able to clear.
Sources
- AI in Cybersecurity: 2025 is the Year of 'Put Up or Shut Up' (Swimlane)
- 2024 ISC2 Cybersecurity Workforce Study
- 2025 Voice of the CISO Report (Proofpoint)
- State of AI Cybersecurity 2026 (Secure.com)
- SANS 2025 AI Survey: Measuring AI's Impact on Security Three Years Later
- Gartner: 60% of CISOs Piloting GenAI, Only 20% See Outcomes
- Gartner Says Generative AI Has Entered the Trough of Disillusionment
- Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027
- 2025 Pulse of the AI SOC Report (Gurucul)
- Verizon 2025 Data Breach Investigations Report
- Is the AI-Cyber Bubble About to Burst? (World Economic Forum)
- How Does SecOps Feel About AI? Excessive Agency (F5 Labs)
- SEC Charges Former Nate, Inc. CEO with AI Fraud
- FTC Sues Air AI Over Deceptive Conversational AI Claims
- Trend Micro Rebrands Enterprise Unit as TrendAI
- JPMorgan Chase CISO Open Letter to Third-Party Suppliers
- RSA 2025: AI's Promise vs. Security's Past — A Reality Check (Anton Chuvakin)
- Forrester's 2026 Technology & Security Predictions
- Year in Review: 2024 AI Securities Litigation Trends (WilmerHale)
- 2025 AI Security Rewind (Resilient Cyber)