In February 2024, the polyfill.io domain and associated GitHub repository were sold to a Chinese CDN operator named Funnull. Four months later, in June 2024, the domain began serving malicious JavaScript to an estimated 100,000 to 500,000 websites that included <script src="https://cdn.polyfill.io/v3/polyfill.min.js"> in their HTML. The payload redirected mobile users to sports betting and adult content sites, with conditional targeting to evade detection.
The incident is a textbook example of what happens when a piece of JavaScript infrastructure you trusted ten years ago gets sold to someone else, and you never renegotiated that trust.
What exactly did the attackers change?
The attackers modified the script-serving logic on the polyfill.io edge to dynamically inject malicious code based on client fingerprinting.
The original polyfill.io service returned a minified bundle of ECMAScript polyfills tailored to the requesting User-Agent. After the handover, the edge started appending extra JavaScript that checked the client fingerprint (User-Agent, referrer, time of day, presence of dev tools, existence of specific tracking cookies) and, when conditions matched, injected a redirect. Desktop users and clients that looked like monitoring systems were served the clean polyfill bundle, which is why automated scanners took weeks to flag the issue.
The injection was server-side, meaning the response body varied per request. There was no single "bad file" to checksum. A site owner fetching polyfill.min.js from their office on Chrome Desktop would see clean code every time. A mobile user on a Brazilian ISP at 2 AM would get redirected to a scam. This variability is the core reason detection lagged.
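The gating logic can be sketched to show why per-request variability defeats point-in-time checks. This is a hypothetical reconstruction for illustration, not the actual attacker code; the function name, fingerprint fields, and conditions are all assumptions based on the reported behavior.

```python
# Hypothetical reconstruction of fingerprint-gated payload selection.
# Field names and conditions are illustrative, not the attacker's code.

CLEAN_BUNDLE = "/* legitimate polyfill bundle */"
MALICIOUS_APPEND = "/* injected redirect stub */"

def select_response(fingerprint: dict) -> str:
    """Return the script body for one request, varying by client fingerprint."""
    ua = fingerprint.get("user_agent", "").lower()
    is_mobile = "android" in ua or "iphone" in ua
    looks_like_scanner = fingerprint.get("headless", False) or "bot" in ua
    off_hours = fingerprint.get("hour", 12) < 6  # target low-scrutiny times

    # Monitoring systems and desktop clients get clean bytes every time,
    # which is why checksumming from an office machine showed nothing.
    if looks_like_scanner or not is_mobile:
        return CLEAN_BUNDLE
    if off_hours:
        return CLEAN_BUNDLE + MALICIOUS_APPEND
    return CLEAN_BUNDLE
```

A scanner hashing the response from a desktop Chrome profile would record the same clean bytes on every fetch, while a mobile client at the wrong hour would receive the appended payload.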
How did the handover become a supply chain attack?
The domain changed hands, the handover was publicly announced, and sites kept including the script tag without revisiting trust.
Andrew Betts, the original maintainer of polyfill.io, publicly warned in February 2024 that no existing website should be using polyfill.io and that browsers had implemented the relevant features natively. That warning circulated in security circles, but the broad web ecosystem did not respond. Documentation, blog posts, and CMS templates from 2015-2020 still referenced cdn.polyfill.io, and those references kept working because the new owners initially served clean content.
Cloudflare and Fastly responded by creating drop-in replacement mirrors pointing at snapshots of the original polyfill library; eventually Namecheap suspended the domain, and Cloudflare began automatically rewriting polyfill.io URLs for sites on its network. But for the window between June 2024 and those takedowns, the attack was active.
Why did so many sites include this script?
Polyfill.io was recommended by major frameworks, browsers, and tutorials for nearly a decade.
The original service was promoted by the Financial Times, endorsed by Mozilla, and embedded in Drupal, WordPress, and Magento templates. Large organizations including government sites, airlines, and ecommerce platforms had script tags left over from modernization projects. At the point of compromise, public scans showed polyfill.io embedded in major sites including JSTOR, Intuit, Mercedes-Benz, and many others. Most of these sites had no active engineering relationship with polyfill.io; the script tag was load-bearing infrastructure that nobody was reviewing.
This is the core pattern: third-party JavaScript from a "trusted" source accumulates in codebases for years, and trust is never re-evaluated when the underlying party changes.
Why didn't SRI catch this?
Almost no sites used subresource integrity hashes for polyfill.io, and the ones that did would have broken because the response body was dynamic per-request.
SRI works when the resource is static: you compute a hash once, embed it in the <script integrity="..."> attribute, and the browser refuses to execute the file if its hash does not match. Polyfill.io's design required per-user response bodies because the whole point was to tailor polyfills to the User-Agent. That made SRI incompatible with the service. Site owners had a choice: use polyfill.io and accept no integrity check, or host a static polyfill bundle themselves with SRI.
Most users chose convenience. This pattern (dynamic CDN content where SRI is not applicable) is the structural weakness. Any dynamic JS CDN where you cannot pin bytes is an implicit trust bet on the operator.
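For the self-hosted path, producing the integrity value is a few lines. A minimal sketch, assuming the bundle is pinned to fixed bytes; the helper name and file path are illustrative:

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value for a static file's bytes."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Illustrative: hash a pinned local bundle and emit the script tag for it.
bundle = b"/* pinned polyfill bundle */"
tag = (f'<script src="/assets/polyfill.min.js" '
       f'integrity="{sri_hash(bundle)}" crossorigin="anonymous"></script>')
print(tag)
```

This only works because the bytes never vary per request, which is exactly the property the dynamic service gave up.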
What should organizations actually do post-incident?
Three things, in priority order: inventory, self-host, and monitor.
Inventory every third-party script in every customer-facing property. Most organizations do not have this list. Run a crawler against your public domains, extract all <script src> entries, and group by origin. Flag anything where the origin is not under your DNS control or a well-understood partner. For each flagged origin, ask: what happens if that domain is sold, expired, or compromised?
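The extraction step can be sketched with the standard library alone. This is a minimal sketch, not a full crawler: it parses static HTML, so scripts injected at runtime by other scripts will not appear, and the function and class names are assumptions.

```python
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcParser(HTMLParser):
    """Collect external <script src> values from one HTML document, by origin."""
    def __init__(self):
        super().__init__()
        self.origins = defaultdict(list)

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return
        origin = urlparse(src).netloc
        if origin:  # skip relative paths: those are same-origin includes
            self.origins[origin].append(src)

def third_party_origins(html: str, own_domains: set) -> dict:
    """Return {origin: [srcs]} for every script origin outside your control."""
    parser = ScriptSrcParser()
    parser.feed(html)
    return {o: srcs for o, srcs in parser.origins.items() if o not in own_domains}
```

Run this over every crawled page, merge the dictionaries, and the flagged list falls out directly: any key you do not recognize is a trust decision someone made years ago.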
Self-host where possible. For libraries that are essentially static (which polyfill.io was, conceptually), pull a known-good version, serve it from your own infrastructure with SRI hashes, and accept the modest maintenance burden. For dynamic services, evaluate whether the dynamism is actually worth the trust expense.
Monitor the rest. Build alerting for changes to third-party script bytes using HTTP canary probes from multiple client profiles (mobile, desktop, varied geographies). If the hash of a resource changes unexpectedly, that should page. Several open-source tools do this, and they would have detected polyfill.io variance within days.
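The comparison logic behind such a probe is simple once the fetching (done per client profile, from multiple geographies) is in place. A sketch of just the decision step, with assumed function names; the fetch layer is omitted:

```python
import hashlib
from typing import Optional

def hash_body(body: bytes) -> str:
    """Hash one HTTP response body for comparison across probes."""
    return hashlib.sha256(body).hexdigest()

def check_variance(observations: dict, baseline: Optional[str]) -> list:
    """observations maps client-profile name -> response hash.

    Returns alert reasons. Cross-profile divergence is the signal that
    would have caught polyfill.io's fingerprint-conditional payloads.
    """
    alerts = []
    if len(set(observations.values())) > 1:
        alerts.append("response bytes differ across client profiles")
    if baseline is not None and any(h != baseline for h in observations.values()):
        alerts.append("response drifted from recorded baseline")
    return alerts
```

Note that a single-profile probe would have missed this incident entirely; the cross-profile check is the part that matters.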
What about the "legacy includes" problem more broadly?
Polyfill.io is one case of a much larger pattern: dependencies that were added years ago, for reasons that may no longer apply, are still load-bearing.
Every organization of any size has a backlog of third-party scripts, libraries, or services that were integrated during a different era of the company. A tracking pixel from a marketing vendor that merged into a different company three years ago. A comments widget from a service that pivoted. A fonts host with new ownership. These accumulate as silent liabilities. The incident triggers a one-off scramble to audit, and then the audit is shelved until the next incident.
The systematic answer is continuous inventory. Every website, every email template, every admin panel should have an automatically maintained list of external origins it loads from, with metadata on who owns each origin, when it was last reviewed, and what the blast radius is if it is compromised. Modern SBOM tooling can include third-party web assets alongside code dependencies; most organizations have not wired this in yet. The cost is moderate and the benefit is directly incident-shaped: when a polyfill.io or an analytics vendor or a chat widget gets compromised, the time from "incident reported" to "we know our exposure" is minutes, not days.
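The record shape for such an inventory is small. An illustrative sketch; the field names and the 180-day review interval are assumptions, not a standard SBOM schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ExternalOrigin:
    """One entry in a continuously maintained third-party origin inventory."""
    origin: str
    owner: str            # current legal owner, re-verified at each review
    last_reviewed: date
    blast_radius: str     # e.g. "all checkout pages", "marketing blog only"

    def needs_review(self, today: date, max_age_days: int = 180) -> bool:
        # Stale reviews are exactly where ownership changes go unnoticed.
        return today - self.last_reviewed > timedelta(days=max_age_days)
```

With this in place, "what is our exposure to origin X" is a dictionary lookup, and the review-staleness check can run in CI.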
The other piece is deprecation discipline. When a dependency is no longer needed, remove it. Do not leave it as a script tag "just in case." Every unused include is a latent compromise.
How Safeguard.sh Helps
Safeguard.sh's reachability analysis treats third-party browser-loaded JavaScript as first-class supply chain dependencies and applies the same 60-80% noise reduction to prioritize customer-facing exposures over internal SCA alerts. Griffin AI maintains behavioral baselines for externally served assets and flags content drift, conditional payload injection, and fingerprint-based delivery patterns like the polyfill.io attack used. The SBOM pipeline includes CDN and frontend dependencies at 100-level depth, so teams can answer "which properties load which scripts from which origins" in seconds. TPRM flows gate third-party JavaScript suppliers under ownership-change triggers, and container self-healing automatically replaces affected build artifacts when upstream trust signals shift.