# The Vercel breach is the Heroku/Travis CI playbook, rerun through an AI tool
A compromised OAuth token at a small AI productivity company gave attackers a path into Vercel's internal systems. The structural pattern is four years old. AI tools are making it worse.
A Vercel employee signed up for an AI productivity tool called Context.ai using their corporate Google Workspace account. Months later, an attacker used that tool’s stored OAuth token to access the employee’s email, pivot into Vercel’s internal systems, and decrypt customer environment variables. The breach started with a Roblox cheat script.
That sequence sounds absurd until you trace it step by step. Then it sounds familiar.
## The pattern underneath
In April 2022, attackers stole OAuth user tokens issued to Heroku and Travis CI, then used them to download private repositories from dozens of GitHub-connected organizations, including npm. GitHub confirmed the tokens were stolen from the integrators, not from GitHub itself. The structural mechanic was simple: a developer platform trusted by thousands of downstream customers stored OAuth tokens that granted access across its entire user base. Compromise the integrator, compromise everyone who trusted it.
The Vercel breach, disclosed in April 2026, runs the same play. A Context.ai employee downloaded Roblox “auto-farm scripts and executors” in February 2026. The download carried Lumma Stealer, an infostealer that harvested corporate credentials including Google Workspace logins and API keys for Supabase, Datadog, and Authkit. Hudson Rock confirmed the Lumma Stealer attribution.
In March, the attacker used those stolen credentials to access Context.ai’s AWS environment. Context.ai detected and blocked the unauthorized AWS access, but the OAuth tokens its infrastructure stored for user integrations had already been exfiltrated. Whether Context.ai notified downstream OAuth users at that point has not been confirmed.
By April, the attacker had what mattered: an OAuth token granting read/write access to a Vercel employee’s Gmail, Google Drive, and Calendar. OAuth tokens bypass MFA once issued. The attacker never needed the employee’s password. From the Google Workspace account, they reached the employee’s Vercel dashboard, internal Linear instance, and customer environment variable store.
Two breaches, four years apart, identical structure. A third-party tool aggregates OAuth tokens across many enterprises. Its backend gets compromised. Every user who granted access is exposed.
## What the data actually shows
The more interesting detail is not the attack chain. It is which class of data was exposed, and why.
Vercel’s architecture distinguishes “sensitive” and “non-sensitive” environment variables. Sensitive variables are encrypted at rest and cannot be decrypted by Vercel’s own systems. Those were not compromised. Non-sensitive variables are stored in a form that can be decrypted to plaintext. The attacker enumerated and decrypted the non-sensitive ones.
The label “non-sensitive” obscures the operational reality. GitGuardian’s analysis noted that many of these values are functionally critical secrets: API keys, tokens, database credentials, signing keys. Most developers had not toggled the sensitive flag because it was opt-in, not the default. The gap between a platform’s security taxonomy and what developers actually store there is where breaches find room to work.
One customer reported receiving a leaked-key notification from OpenAI on April 10, nine days before Vercel’s public disclosure, for an API key that had never existed outside Vercel’s environment. That is the strongest public evidence that exfiltrated credentials were used in the wild before Vercel acted. Solana DEX Orca confirmed it rotated all deployment credentials as a precaution, though on-chain protocol and user funds were not affected.
A threat actor operating under the ShinyHunters banner listed allegedly stolen data for $2 million on BreachForums: API keys, source code, 580 employee records, internal deployment access. Google Threat Intelligence Group assessed the ShinyHunters attribution as “likely an imposter attempting to use an established name.” Whether the listing represents verified data has not been independently confirmed.
## The AI tool amplifier
The Vercel breach adds two elements the 2022 Heroku/Travis CI incident lacked.
First, scope inflation. AI productivity tools request unusually broad OAuth permissions to function. Email, calendar, drive, workspace. Context.ai’s “AI Office Suite” needed read/write access across Google Workspace to deliver its features. Each scope grant is a potential lateral movement path if the token is stolen. Traditional developer integrations (a CI system reading a Git repo, a monitoring tool pulling metrics) tend to request narrower scopes tied to a specific workflow. AI tools that promise to “understand your work” need access to most of it.
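The contrast in scope breadth can be made concrete with a rough heuristic. The scope URLs below are real Google OAuth scopes, but the broad/narrow classification and the example grant lists are illustrative assumptions, not Context.ai's actual permissions:

```python
# Rough heuristic for flagging broad Google OAuth scope grants.
# The breadth classification is a judgment call for illustration,
# not an official Google taxonomy.

BROAD_SCOPES = {
    "https://mail.google.com/",                    # full Gmail read/write/delete
    "https://www.googleapis.com/auth/drive",       # full Drive access
    "https://www.googleapis.com/auth/calendar",    # full Calendar read/write
}

NARROW_SCOPES = {
    "https://www.googleapis.com/auth/drive.file",  # only files the app created
    "https://www.googleapis.com/auth/calendar.readonly",
}

def flag_broad_grants(scopes: list[str]) -> list[str]:
    """Return the scopes in a grant that give broad workspace access."""
    return [s for s in scopes if s in BROAD_SCOPES]

# A hypothetical "AI office suite" grant versus a typical CI integration:
ai_tool = ["https://mail.google.com/",
           "https://www.googleapis.com/auth/drive",
           "https://www.googleapis.com/auth/calendar"]
ci_tool = ["https://www.googleapis.com/auth/drive.file"]

print(flag_broad_grants(ai_tool))  # all three scopes flagged
print(flag_broad_grants(ci_tool))  # []
```

The asymmetry is the point: one stolen CI token exposes repositories; one stolen AI-suite token exposes the employee's entire communications and document history.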
Second, shadow adoption. Vercel was never an enterprise customer of Context.ai. A single employee signed up individually using their corporate Google account, wiring a third-party AI platform’s token store into Vercel’s identity perimeter without a formal security review. No procurement process. No security assessment. No visibility for the security team. The employee was not doing anything unusual by 2026 standards; signing up for AI tools with a work account is routine. That is the problem.
The combination is what matters. Shadow-adopted tools that request broad scopes and store long-lived tokens in a centralized backend create exactly the conditions that made Heroku and Travis CI attractive targets in 2022. The difference is that there are more of these tools now, they request broader access, and organizations have less visibility into which ones their employees are using.
## What this means for prioritization
If your organization uses Vercel, the immediate action is straightforward: rotate any environment variable not marked “sensitive” and enable the sensitive flag for all credential-type variables going forward. Vercel has announced that “sensitive” will become the default for new environment variable creation.
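A quick triage pass can be scripted. The sketch below assumes env var records shaped like the Vercel REST API's project env listing (`GET /v9/projects/{id}/env`), where each entry has a `key` and a `type` and sensitive variables carry `type == "sensitive"`; verify those field names against the current API docs before relying on this:

```python
import re

# Name patterns that usually indicate a credential, regardless of the
# platform's "sensitive" flag. Extend to match your own naming conventions.
CREDENTIAL_PATTERN = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.I)

def find_mislabeled(envs: list[dict]) -> list[str]:
    """Return names of credential-looking variables not stored as sensitive.

    Record shape (key/type fields, "sensitive" type value) is an assumption
    based on Vercel's env API; check the current docs.
    """
    return [e["key"] for e in envs
            if CREDENTIAL_PATTERN.search(e["key"]) and e.get("type") != "sensitive"]

envs = [
    {"key": "OPENAI_API_KEY", "type": "encrypted"},     # decryptable by the platform
    {"key": "DB_PASSWORD", "type": "plain"},
    {"key": "STRIPE_SECRET_KEY", "type": "sensitive"},  # opt-in flag set: fine
]
print(find_mislabeled(envs))  # ['OPENAI_API_KEY', 'DB_PASSWORD']
```

Anything this flags belongs on the rotation list, whatever the platform's taxonomy calls it.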
The structural action is harder and more important. Most organizations have never audited their active OAuth grants to AI tools. Google Workspace Admin Console and Microsoft 365 Admin Center both provide visibility into which third-party applications hold OAuth tokens for your users. Running that audit will likely surface grants you did not know existed, to tools no one formally approved, with scopes broader than the tool needs.
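In Google Workspace, the raw grant data can be pulled per user with the Admin SDK Directory API's `tokens.list`; the filtering step is then trivial. The sketch below assumes that API's record shape (`clientId`, `displayText`, `scopes`), and every client ID shown is hypothetical:

```python
# Hypothetical allowlist: apps your security team has actually reviewed.
APPROVED_CLIENT_IDS = {"ci-system.apps.googleusercontent.com"}

def unapproved_grants(tokens: list[dict]) -> list[dict]:
    """Filter a user's OAuth token list down to grants nobody approved.

    Token records are assumed to follow the Admin SDK Directory API
    tokens.list shape (clientId, displayText, scopes); verify field
    names against the current API reference.
    """
    return [t for t in tokens if t.get("clientId") not in APPROVED_CLIENT_IDS]

tokens = [
    {"clientId": "ci-system.apps.googleusercontent.com",
     "displayText": "CI System", "scopes": ["repo.read"]},
    {"clientId": "context-ai.apps.googleusercontent.com",
     "displayText": "Context.ai", "scopes": ["https://mail.google.com/"]},
]
for t in unapproved_grants(tokens):
    print(t["displayText"], t["scopes"])
```

The output of that loop across all users is the shadow-adoption inventory most organizations have never seen.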
The pattern to break: an employee signs up for an AI tool, grants OAuth access to their corporate identity provider, and the tool stores a long-lived token in its backend. If that backend is compromised, MFA does not help. The token already passed authentication. The attacker inherits whatever access the employee granted.
Requiring security sign-off before any AI tool receives OAuth access to production identity providers is the structural control. Quarterly OAuth grant audits catch the grants that slip through. Moving secrets into dedicated vaults (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) reduces the blast radius when a platform’s “non-sensitive” storage turns out to be holding critical credentials. PatchDay Alert tracks credential exposure incidents like this one alongside the daily CVE feed, because the operational priority is the same: rotate what’s exposed, close the path, audit what else is reachable.
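The vault migration does not have to be all-or-nothing. A minimal runtime pattern is to resolve secrets through a vault first and fall back to the process environment during the transition; the sketch below keeps the vault client abstract (for HashiCorp Vault it might wrap an hvac KV read) because the lookup order, not any one vault API, is the point:

```python
import os
from typing import Callable, Optional

def get_secret(name: str,
               vault_fetch: Optional[Callable[[str], Optional[str]]] = None) -> str:
    """Resolve a secret from a vault first, process env second.

    vault_fetch is whatever lookup your vault client exposes; it is left
    as an injected callable here, which also makes the function testable.
    """
    if vault_fetch is not None:
        value = vault_fetch(name)
        if value is not None:
            return value
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} not found in vault or environment")
    return value

# With no vault wired up, falls back to the environment:
os.environ["DEMO_API_KEY"] = "example-value"
print(get_secret("DEMO_API_KEY"))  # example-value

# With a vault, the platform's env store never holds the real credential:
fake_vault = {"DEMO_API_KEY": "vault-value"}.get
print(get_secret("DEMO_API_KEY", vault_fetch=fake_vault))  # vault-value
```

Once the vault path covers a credential, the env var can shrink to a non-secret pointer, which is what limits the blast radius when the platform's storage is enumerated.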
## What to watch
Vercel’s disclosure timeline raises questions that do not have public answers yet. The estimated infection at Context.ai was February 2026. Vercel’s first public bulletin was April 19, the same day as the BreachForums listing. The total number of affected customers has not been disclosed.
More concerning: Mandiant’s forensic review uncovered a separate, earlier compromise predating the Context.ai intrusion, attributed to “social engineering, malware, or other methods.” Vercel disclosed this second breach on April 23, four days after the initial bulletin. The full scope and origin of that earlier compromise remain unclear.
The question worth tracking is not whether Vercel handled this well or poorly. It is whether Context.ai notified downstream OAuth users in March, when it discovered its own AWS breach but before the tokens were used against Vercel. If it did not, and if that gap allowed weeks of undetected access, then the notification obligations for OAuth integrators are the structural gap that needs closing. NIST IR 8587 (draft, December 2025) addresses token protection guidance, but notification timelines for downstream token holders remain undefined. That is where the next version of this breach will find its opening.
## Sources
- Vercel April 2026 security incident (Vercel Knowledge Base)
- Breaking: Vercel Breach Linked to Infostealer Infection at Context.ai (Hudson Rock / InfoStealers)
- Vercel April 2026 Incident: Non-Sensitive Environment Variables Need Investigation Too (GitGuardian)
- Vercel's security breach started with malware disguised as Roblox cheats (CyberScoop)
- Vercel Breach Tied to Context AI Hack Exposes Limited Customer Credentials (The Hacker News)
- Hack at Vercel sends crypto developers scrambling to lock down API keys (CoinDesk)
- The Vercel Breach: OAuth Supply Chain Attack Exposes the Hidden Risk in Platform Environment Variables (Trend Micro)
- GitHub OAuth tokens stolen from Heroku and Travis CI used to download private repositories (2022)
## Related field notes

- 50 CVEs in 18 months is not a growing pain. It's a design choice the industry keeps making. MCP went from unknown to default AI integration in under two years. The vulnerability count, the OWASP Top 10, and the simultaneous client failures tell a story about what happens when adoption is the only metric.
- Three hours was the good outcome: npm's trust model and the Axios compromise. A DPRK threat actor backdoored two Axios versions on npm. Socket flagged the malicious dependency in six minutes. Nothing stopped the downstream publish fifteen minutes later. The system worked exactly as designed.
- Spirit Airlines is dead. Its attack surface isn't. The security story isn't that an airline went bankrupt. It's what happens to 132 APIs, years of customer PII, and a cloud footprint when a company dies overnight and nobody is left to decommission it.