PatchDay Alert
MAY 4, 2026
Analysis · 5 min read · By Victor Hayes

50 CVEs in 18 months is not a growing pain. It's a design choice the industry keeps making.

MCP went from unknown to default AI integration in under two years. The vulnerability count, the OWASP Top 10, and the simultaneous client failures tell a story about what happens when adoption is the only metric.

The Model Context Protocol has been publicly available for about 18 months. In that time, the Vulnerable MCP Project has cataloged over 50 known vulnerabilities, 13 of them critical. OWASP had already published a dedicated Top 10 for it by 2025. Every major AI coding client shipped broken authentication at the same time. This is not a new protocol going through growing pains. This is what happens when an entire industry adopts an integration standard and nobody stops to ask what it does to the attack surface.

I’ve written about the STDIO design flaw specifically, and Claire covered the Vercel breach as an OAuth supply chain case. Those are individual data points. The aggregate picture is worse than either one suggests.

The numbers

PipeLab’s State of MCP Security 2026 report puts the ecosystem scan results in one place. They found 7,000+ publicly accessible MCP servers, 200,000+ potentially vulnerable instances, and 150 million aggregate downloads across affected packages. Of 2,614 MCP implementations analyzed, 82% use file operations prone to path traversal. 67% use code-injection-prone APIs. 34% are vulnerable to command injection.
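
The path-traversal figure maps to a well-known coding pattern: a file tool joining a user-supplied relative path onto a base directory without normalizing it first. A minimal sketch of the bug and the fix (a hypothetical handler, not code from any audited server):

```python
from pathlib import Path

BASE = Path("/srv/mcp-files")  # hypothetical sandbox root for the file tool

def resolve_unsafe(rel: str) -> Path:
    # Vulnerable: "../" segments walk out of BASE unchecked
    return BASE / rel

def resolve_safe(rel: str) -> Path:
    # Normalize first, then verify the result is still inside BASE
    p = (BASE / rel).resolve()
    if not p.is_relative_to(BASE.resolve()):
        raise PermissionError(f"path escapes sandbox: {rel}")
    return p
```

With this check, `resolve_safe("../../etc/passwd")` raises instead of returning `/etc/passwd`. (`Path.is_relative_to` needs Python 3.9+.)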

492 MCP servers were discovered running with zero authentication.

The credential picture is equally bad. Researchers found 24,008 unique secrets exposed in MCP configuration files on public GitHub repositories. Of those, 2,117 were still valid at the time of the scan. That is 8.8% of exposed secrets, sitting on public GitHub, still working. And 88% of 5,200+ open-source MCP servers require credentials of some kind, but 53% rely on static API keys or personal access tokens rather than scoped, rotatable credentials.

Pynt, cited in PipeLab’s report, calculated that connecting to just 10 MCP servers gives you a 92% probability of encountering at least one exploitable vulnerability. Ten servers. Ninety-two percent.
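
Pynt does not publish a per-server rate, but under a naive independence assumption the 92% aggregate implies one: solve 1 − (1 − p)¹⁰ = 0.92 for p.

```python
# Back out the implied per-server exploitability rate, assuming
# independent servers: P(at least one) = 1 - (1 - p)**n
n, p_any = 10, 0.92
p_single = 1 - (1 - p_any) ** (1 / n)
print(round(p_single, 3))  # roughly 0.22: about one server in five
```

Roughly one in five servers, per connection, under the independence assumption.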

Every client at once

The individual STDIO and OAuth stories are bad enough. The systemic version is that every major AI coding client shipped the same class of vulnerability simultaneously.

Cursor had two CVEs (CVE-2025-54136, CVE-2025-59944): a config bypass that let modified MCP configurations take effect without re-approval. Windsurf (CVE-2026-30615) needed zero user interaction; prompt injection via malicious HTML could silently modify MCP config. Claude Code (GHSA-9f65-56v6-gxw7) required explicit file modification approval but was still in the affected family. VS Code with GitHub Copilot and Gemini-CLI both required user interaction before MCP JSON edits, which is better, but the fact that the question even needed answering across every client tells you the protocol itself failed to specify the guardrail.
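
The guardrail the protocol never specified is cheap to state: pin a digest of each MCP config at approval time and force a re-prompt whenever it changes. A sketch of what that client-side check could look like (hypothetical, not any client's actual code):

```python
import hashlib
import json

_approved: dict[str, str] = {}  # server name -> digest pinned at approval

def _digest(config: dict) -> str:
    # Canonical JSON so key order does not change the hash
    blob = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

def approve(name: str, config: dict) -> None:
    """Call only after the user has explicitly reviewed this config."""
    _approved[name] = _digest(config)

def needs_reapproval(name: str, config: dict) -> bool:
    """True if the config is new or has changed since the user approved it."""
    return _approved.get(name) != _digest(config)
```

Any edit to the config, however it got there, flips `needs_reapproval` back to true; that is exactly the property the Cursor and Windsurf bugs lacked.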

Then there is mcp-remote (CVE-2025-6514), the shared transport library used by Claude Desktop, Cursor, and Windsurf. CVSS 9.6, OS command injection, 437,000+ weekly downloads. A single dependency, a single critical flaw, three major clients exposed.
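
OS command injection, the bug class behind CVE-2025-6514, almost always reduces to attacker-controlled text reaching a shell. The contrast is easy to demonstrate with harmless commands on a POSIX system (illustrative only, not mcp-remote's actual code):

```python
import subprocess

payload = "hello; echo INJECTED"

# Vulnerable pattern: the payload is interpolated into a shell line,
# so the "; echo INJECTED" half executes as a second command.
unsafe = subprocess.run(f"echo {payload}", shell=True,
                        capture_output=True, text=True)

# Safe pattern: argument vector, no shell; the payload stays a single
# argv entry and its metacharacters are inert data.
safe = subprocess.run(["echo", payload], capture_output=True, text=True)

print(unsafe.stdout)  # two lines: the injection ran
print(safe.stdout)    # one line: the payload echoed back verbatim
```

The fix is the same in every language: never let untrusted input reach a shell string; pass arguments as a vector.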

And then there is Anthropic’s own reference implementation: mcp-server-git shipped with a chained trio of CVSS 10.0 vulnerabilities (CVE-2025-68143, CVE-2025-68144, CVE-2025-68145). The official example that every downstream developer was supposed to learn from. Three perfect 10s.

The OAuth layer

The authentication story is its own disaster. MCP’s specification provided no normative authentication requirements until the March 2026 update. For roughly 16 months, the protocol that was becoming the default way AI agents connect to services had nothing to say about how those connections should be authorized.

When OAuth support did ship, it arrived broken. Cloudflare’s workers-oauth-provider had a PKCE bypass (CVE-2025-4144). Redirect URI validation was implemented incorrectly (CVE-2025-4143). Localhost was assumed secure without authentication, which fell apart under DNS rebinding (CVE-2025-66414, CVE-2025-66416). Doyensec called it “the MCP AuthN/Z nightmare,” which is about as direct as a security research firm gets.
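
For reference, the S256 check that RFC 7636 requires at the token endpoint fits in a few lines; the bypass class is any server that treats a missing or mismatched code_verifier as acceptable. A sketch (not Cloudflare's code):

```python
import base64
import hashlib
import secrets

def make_verifier() -> str:
    # Client side: a random, high-entropy code_verifier (RFC 7636 §4.1)
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def challenge_s256(verifier: str) -> str:
    # code_challenge = BASE64URL(SHA-256(code_verifier)), unpadded
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def verify(stored_challenge: str, method: str, verifier: str) -> bool:
    # Server side, at the token endpoint: must hard-fail on anything
    # other than a matching S256 pair. No fallback, no "plain".
    if method != "S256":
        return False
    return secrets.compare_digest(challenge_s256(verifier), stored_challenge)
```

The whole point of PKCE dies the moment a server accepts a token request that skips or fails this comparison.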

The first confirmed in-the-wild malicious MCP package, postmark-mcp, appeared in September 2025. It hit roughly 300 organizations. It was a supply chain test run on a protocol that had no authentication spec and minimal registry review. Eight months later, OWASP is writing top-10 lists because the vulnerability surface is already too large to address with individual advisories.

Why it keeps happening

The structural explanation is straightforward. MCP went from barely known to default AI agent integration faster than any comparable protocol in recent memory. The adoption curve was driven by AI companies that needed a standard way to give agents access to tools, and they needed it immediately.

The MCP spec shipped without normative auth. The official SDKs shipped without input sanitization. The reference server shipped with three CVSS 10.0 vulnerabilities. The marketplaces accepted malicious packages without review. Each of these is a place where someone chose speed over hardening. Not maliciously. Just predictably.

And the environment those agents land in is not prepared for them. Enterprise surveys consistently report that the vast majority of organizations have unsanctioned AI tool use, that most security teams have no inventory of which AI tools their employees have granted OAuth access to, and that AI-related data policy violations are a monthly occurrence, not an edge case. MCP is the protocol those AI agents are increasingly using to connect. Every MCP vulnerability compounds into that broader exposure.

What you should actually expect

The MCP specification will continue to improve. The March 2026 update was real progress. OAuth support, when correctly implemented, is a genuine improvement over “no auth at all.” The vulnerability count will plateau as researchers finish sweeping the early implementations and patches ship.

But the structural pattern will repeat. The next protocol, the next agent framework, the next integration standard will face the same adoption-before-hardening pressure. The incentive structure has not changed. Ship fast, fix later, and treat the security community’s free bug reports as a feature of the development process.

If you run MCP servers today, the practical posture is: treat every MCP server as untrusted until you have verified its authentication, its input validation, and its transport security individually. Audit your OAuth grants. Check your config files for exposed secrets (8.8% of them are still valid, remember). Containerize what you cannot upgrade. And assume that “connecting to 10 servers means a 92% chance of exploitation” is a statement about your current environment, not a hypothetical.
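
A first pass over your config files does not need a commercial scanner; a few regexes catch the obvious static tokens. The patterns below are illustrative, not exhaustive — purpose-built tools like gitleaks or trufflehog go much further:

```python
import re
from pathlib import Path

# Illustrative token patterns only; real scanners cover hundreds more
PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_secret) pairs found in text."""
    return [(name, m)
            for name, pat in PATTERNS.items()
            for m in pat.findall(text)]

def scan_config(path: Path) -> list[tuple[str, str]]:
    # Point this at mcp.json / claude_desktop_config.json and the like
    return scan_text(path.read_text(errors="ignore"))
```

Anything this turns up should be rotated, not just deleted from the file; the git history still has it.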

PatchDay Alert tracks MCP-related CVEs in the daily digest alongside everything else. They are not a special category. They are part of the patch queue now.

The real lesson

Fifty vulnerabilities in 18 months, an OWASP Top 10 for a protocol that did not exist two years ago, every major client broken at the same time, and the protocol designer calling the foundational flaw “expected behavior.” That is not a security failure. It is an adoption success that nobody bothered to secure. The industry looked at MCP, decided it was too useful to slow down, and shipped it anyway. The 50 CVEs are the receipt.
