The Intelligence Hub is the curated stream of detailed intelligence reports authored by Safeguard's model lineup — Griffin Zero, Griffin L, Eagle — when continuous platform operation surfaces a critical vulnerability or novel attack vector. Every dossier ships only after coordinated disclosure clears with the upstream maintainer, with the structured reasoning trace included verbatim alongside the proposed mitigation.
Six steps, in order. The pipeline is the same for every dossier on this page — and the same for every customer running the platform on their own code.
The platform's reachability scanners continuously sweep mirrored open-source code and customer SBOMs. A candidate is any dataflow path that lands on a known-sensitive sink with no upstream sanitiser.
Eagle scores every candidate by exploitability, blast radius, and prior-art similarity, then clusters near-duplicates into a single priority-queue entry so reviewers see one chain, not eighty variants.
A Griffin model runs the full end-to-end reasoning pass — hypothesis, call-graph traversal, sanitiser model, exploit feasibility. The structured trace from this pass is what later gets cited verbatim in the dossier.
A second Griffin instance, primed with a contrarian system prompt, tries to refute the first model's chain. Only candidates whose chain survives the disproof attempt advance to coordinated disclosure.
A senior engineer reproduces the chain by hand, drafts a minimal proof, and opens coordinated disclosure with the upstream maintainer. The 90-day window starts on maintainer acknowledgement.
Once the maintainer has shipped a patch and the window has elapsed, the AI agent assembles the public dossier from its own trace; a human reviewer signs the final document; it's published here and into the RSS feed.
Stable identifier of the form SG-INTEL-YYYY-NNNN. Never reused; reissued IDs are appended -R1, -R2.
The exact Griffin or Eagle build that produced the structured reasoning trace, with model checkpoint hash for reproducibility.
Critical, High, Medium, Low. Set from CVSS plus exploitability evidence the model observed, not from CVSS alone.
0.00 to 1.00. Lower bounded by the disproof pass — anything that survived adversarial refutation lands at or above 0.85.
Primary CWE plus a short list of contributing weakness classes. Used for cross-dossier clustering.
Package coordinates, version ranges, and the precise file paths and line ranges the chain traverses.
Source → transformations → sink, expressed as an ordered list of call-graph hops with the function signatures inlined.
Verbatim slice of the structured trace, in the HYPOTHESIS / CITED PATH / DISPROOF / PROPOSED PATCH format the platform emits.
The adversarial pass: what the contrarian model tried, why each attempt failed to refute the chain.
A concrete patch or a configuration change, narrow enough to land upstream without API breakage.
D0 detection, Dn maintainer notified, Dn patch landed, Dn public disclosure, Dn dossier published.
Upstream commit hash, advisory URL, related CWEs, related prior-art dossiers, external research credited.
The chain begins at the JSON transformer's type-tag fast-path and lands inside the plugin loader's deferred-require resolver, traversing nine call-graph hops across two packages. The transformer trusts a __sg_type field on incoming records, looks it up in a constructor registry initialised at module load, and applies the resulting class without verifying that the registry itself is intact. Because the registry is a plain object reachable via prototype, a prior pollution gadget anywhere in the same process — including the library's own merge helper, lib/util/merge.js:44 — silently grafts an attacker-chosen entry on top of the prototype chain.
The exploit is reachable from any service that logs an attacker-controlled request body before sanitisation, which is the common pattern across the affected dependents. Once the constructor lookup resolves to the polluted entry, the plugin loader's lazy-require path is invoked with the attacker-supplied module identifier; the loader then calls require() against the user-controlled string and immediately executes the module's top-level statements. The end-to-end primitive is unauthenticated remote code execution on the logging process, which on most affected deployments is the same process serving HTTP traffic.
The proposed mitigation is a one-line guard at lib/transformers/json.js:122 that rejects type-tag values whose prototype is not Object.prototype, plus a hard registry freeze applied at module load. We attached a patch to the disclosure that takes the worst-case decode time from microseconds to within the same order of magnitude — measurable but not detectable in practice. The maintainer accepted the patch with minor renaming and shipped it in 4.2.1 alongside an advisory.
Disclosure opened on D0+2 to logger-core's maintainer; acknowledgement landed inside 18 hours. The maintainer requested a 30-day extension to coordinate with two large downstream packagers that vendored the transformer; we granted it without objection. Patch shipped on D0+45, advisory cleared CVE assignment on D0+88, this dossier published on D0+92. No exploitation observed in our customer telemetry across the disclosure window.
HYPOTHESIS:
Untrusted __sg_type field in logger-core's JSON transformer
resolves to attacker-grafted constructor via prototype pollution,
reaching the plugin loader's lazy require() call.
CITED PATH (9 hops):
[1] http-server/middleware/body.js:71 -> req.body propagates raw JSON
[2] app/logger.js:34 -> logger.info(req.body) call site
[3] logger-core/index.js:118 -> entry into structured pipeline
[4] logger-core/lib/transformers/json.js:118 -> readTypeTag(record)
[5] logger-core/lib/transformers/json.js:154 -> registry[tag] lookup
[6] (prototype lookup) -> resolves polluted entry
[7] logger-core/lib/loader/plugin.js:62 -> resolvePluginById(entry.id)
[8] logger-core/lib/loader/plugin.js:78 -> require(idStr)
[9] <attacker module top-level> -> code execution
DISPROOF:
Attempt 1: sanitiser model claimed the registry is frozen at
module load. REFUTED at lib/registry.js:41 — Object.freeze is
called on the wrong object (the wrapper, not the table).
Attempt 2: claim that pollution gadget is not reachable.
REFUTED — lib/util/merge.js:44 accepts __proto__ keys when the
parser option deepMerge=true, which is the default for the
library's own config loader (lib/config/load.js:21).
Attempt 3: claim that require() rejects relative paths.
REFUTED — line 78 calls require.resolve() with paths option
pointing at process.cwd(), accepting any module name resolvable
from the consumer's node_modules.
PROPOSED PATCH:
--- a/lib/transformers/json.js
+++ b/lib/transformers/json.js
@@ -120,6 +120,9 @@ function readTypeTag(record) {
const tag = record.__sg_type;
if (typeof tag !== 'string') return null;
+ if (!Object.prototype.hasOwnProperty.call(registry, tag)) {
+ return null;
+ }
return registry[tag];
}
CONFIDENCE: 0.94 (survived 3-attempt disproof pass).The chain is structurally similar to several public prototype-pollution disclosures but novel in its reachability: the gadget lives in a transitive utility three hops down from any of the affected dashboards, and the sink is the dashboards' own runtime-config loader, not a third-party plugin. The polluting input is a JSON file uploaded through the dashboards' template-import surface, which all three affected products expose to authenticated tenant administrators. Privilege does not stay scoped to the tenant: the config loader is process-global and the polluted prototype affects subsequent requests across tenants.
Reachability was confirmed by running the platform's reachability proof against each dashboard's release artifact. The chain crosses obj-merge-helper's deepMergeRaw entry point (which still accepts __proto__ when the legacy options.preservePrototype flag is unset), traverses two pass-through adapters in dashboard-shell's import pipeline, and lands at the runtime-config loader's defaults-merge call. The loader then reads the now-polluted defaults during the next request lifecycle, returning attacker-chosen values for feature flags and access-control toggles.
Mitigation is a defence-in-depth pair: upstream, obj-merge-helper@2.4.7 hard-rejects __proto__ and constructor keys; downstream, dashboard-shell ships a wrapper that switches the import pipeline to a structured-clone path before merging. We proposed both patches; the maintainer of obj-merge-helper accepted the first within the window, and the dashboard-shell team picked up the wrapper in their 11.4 release.
Coordinated disclosure was multi-party: obj-merge-helper maintainer (D0+1), dashboard-shell maintainers (D0+1), and a private notice to the three affected dashboard vendors (D0+3). All five parties were on a shared timeline. Patches landed by D0+58; the affected dashboards rolled them out across customer instances by D0+74. Public dossier published D0+92.
HYPOTHESIS:
obj-merge-helper.deepMergeRaw accepts attacker-controlled
__proto__ key from an authenticated tenant-admin import,
poisoning the global config loader's defaults across tenants.
CITED PATH (7 hops):
[1] dashboard-shell/api/templates/import.ts:42
[2] dashboard-shell/lib/templates/normalise.ts:88
[3] obj-merge-helper/index.js:14 -> deepMergeRaw entry
[4] obj-merge-helper/lib/walk.js:31 -> recursive descent
[5] (prototype write) -> Object.prototype.flags.*
[6] dashboard-shell/lib/config/loader.ts:117 -> defaults merge
[7] next-request handler -> reads polluted flag
DISPROOF:
Attempt 1: tenant-scoped merge isolates the prototype.
REFUTED — defaults object is module-level singleton.
Attempt 2: import route requires admin role and is rate-limited.
PARTIALLY VALID but does not change the unauthorized-across-
tenants impact once the upload completes.
PROPOSED PATCH:
obj-merge-helper: hard-deny __proto__ and constructor keys.
dashboard-shell: switch import pipeline to structuredClone.
CONFIDENCE: 0.89 (cross-tenant impact verified end-to-end).Eagle's clustering pass surfaced twelve newly-published pypi packages whose names sit at Levenshtein distance one from a leading data-pipeline framework and three popular cloud-SDK extras. The publication times are tightly grouped — all within a 36-minute window across two author accounts — and the wheels share a deobfuscation routine, an exfiltration host pattern, and a near-identical setup.py post-install hook. Eagle's similarity score against the cluster centroid sits at 0.93 minimum and 0.99 maximum.
Each post-install script reads four files commonly mounted into CI/CD runners — ~/.aws/credentials, ~/.netrc, /run/secrets/*, and GITHUB_TOKEN from the environment — base64-encodes them, and POSTs to one of three short-lived collector hosts behind a rotating subdomain. The collector hosts share a TLS fingerprint and serve a small Go binary on a different port; we have not analysed the binary but record its hash for follow-up. The packages do not attempt any persistence on the runner beyond the single exfiltration call.
Mitigation is two-track. For the registry: we filed a takedown packet with the cluster's twelve package names, the two author handles, the three collector hostnames, and YARA signatures over the post-install script's deobfuscation routine. For customers: the platform issued automatic policy denials against the cluster within four minutes of detection, with a tenant-level allowlist override for any team that needed it. No customer telemetry recorded a successful install across the disclosure window.
The registry's security team confirmed receipt within 14 hours and completed takedown of all twelve packages within 36 hours. The two author accounts were locked. Disclosure to the registry happened on D0; coordinated public note from the registry on D0+11; this dossier on D0+78. Eagle's report includes the full cluster manifest, hash list, and a recommended hunt query for SBOM scanners.
HYPOTHESIS:
Twelve recent pypi uploads form a coordinated typosquat
campaign targeting CI/CD credential files; common author
fingerprint, common post-install primitive, common exfil host.
CITED PATH:
Cluster centroid features:
- setup.py post_install hook present, identical AST shape
- collector domain regex: [a-z]{8}\.exfil-host-[1-3]\.live
- identical b64+gzip deobfuscation routine, sha1 d3a4f...
- target files: ~/.aws/credentials, ~/.netrc, /run/secrets/*
Cluster members (12): redacted list, sha256 fingerprints in
references; intra-cluster cosine similarity >= 0.93.
DISPROOF:
Attempt 1: claim that hooks are dormant and never executed.
REFUTED — pip wheel installs invoke post_install on extract
when the file exists; verified via sandboxed install.
Attempt 2: claim that target files are not present on typical
runners. REFUTED — 9 of 12 packages list a build-system
requirement that is itself a credentialed-CI dependency,
biasing installs toward those runners.
PROPOSED MITIGATION:
Registry-side takedown filed; customer-side automatic policy
deny enacted; YARA signatures published for downstream hunt.
CONFIDENCE: 0.97 (cluster signal far above background).The Normalizer treats a URL string as canonicalised once a single allowlist pass returns success, but the allowlist runs after a partial decode that strips one of three percent-encoding layers. Attacker-supplied URLs that nest a percent-encoded loopback alias inside an alternative IP encoding survive the first decode, pass the allowlist, and are then fully decoded by the downstream HTTP client at request time. The end-to-end primitive is an outbound request from the gateway process to any host reachable on its private network, including the cloud-provider metadata service.
Reachability is broad. The library is used by twelve enterprise gateway products that each accept a webhook-target URL as part of tenant configuration. In every product we tested, the configuration path validates URLs by passing them through url-builder-core's allowlist API, which means the same bypass works across all twelve. The most damaging variant is the cloud-metadata redirect: in the eight products deployed on cloud providers with IMDSv1-compatible endpoints, a successful redirect returns scoped temporary credentials directly to the gateway's logs.
Mitigation is a strict-decode mode for the Normalizer that performs all percent-decoding before any allowlist evaluation, plus an explicit refusal of three classes of IP encoding (integer-form, hex-form, and short-form loopback aliases). We attached the patch to the disclosure with a test corpus of 312 known-bypass inputs; all 312 are rejected by the patched code. The maintainer accepted the patch and shipped url-builder-core 3.1.5 with the strict-decode default on.
Disclosure coordinated with the url-builder-core maintainer (D0+1) and a private notice to each of the twelve downstream gateway vendors (D0+3). Two vendors requested an extended embargo to coordinate with their largest customers; we granted it. Patch landed upstream on D0+39, gateway rollouts completed across vendors by D0+85, dossier published D0+92.
HYPOTHESIS:
Normalizer's allowlist runs against a partially-decoded URL,
allowing a doubly-encoded loopback alias to pass through and
resolve to 127.0.0.1 / 169.254.169.254 at request time.
CITED PATH:
Normalizer.normalise(String url) at line 204
-> decodeOnce(url) at line 222
-> isAllowedHost(decoded) at line 261
-> returns canonical string with one remaining encoded layer
-> downstream HttpClient.execute() fully decodes
-> request sent to original target IP
DISPROOF:
Attempt 1: claim that the allowlist rejects IPv4 literals.
REFUTED — allowlist matches on hostname after decode, but the
bypass payload still parses as a hostname (e.g. encoded
%2531%2532%2537%252E... resolves only on the second decode).
Attempt 2: claim the HTTP client refuses private-range targets.
REFUTED — only two of the twelve gateway products configure
such a refusal; the rest rely on url-builder-core's allowlist
as the sole boundary.
PROPOSED PATCH:
Add Normalizer.strictDecode(true) default; refuse integer- and
hex-form IPv4 literals and short-form loopback aliases.
CONFIDENCE: 0.91 (cross-vendor reachability verified).The regex driving the quoted-local-part matcher is structurally susceptible to catastrophic backtracking when the input contains a long run of escaped quote characters followed by an unmatched terminator. The library's documentation does not state that the validator should be called on untrusted input, but a survey of dependent code shows the function is almost always invoked on request headers, query parameters, or form fields without an upstream length limit. A 1.4 KB header is enough to push event-loop latency past 30 seconds on a modern Node runtime.
The chain is reachable from any HTTP server that calls email-check-lite on a header value before length-limiting it. In four of the five frameworks we surveyed, the validator is wired into a default middleware that runs before user code; in the fifth, the framework's documentation example shows the validator being called inline on req.headers['x-customer-email'] with no guard. None of the surveyed configurations limit input length below the threshold at which backtracking becomes catastrophic.
Mitigation is a deterministic rewrite of the quoted-local-part regex that preserves the original grammar (verified against the library's existing 318-case test corpus) while collapsing the worst case from exponential to linear in input length. We attached both the rewrite and a fuzzing harness based on the library's grammar. The maintainer accepted the rewrite, added an explicit max-length parameter defaulting to 254 octets, and shipped 5.4.0.
Coordinated disclosure opened D0+1, maintainer acknowledged D0+1, patch landed D0+27, advisory cleared CVE assignment D0+74, this dossier published D0+92. No customer telemetry observed exploitation during the disclosure window; downstream patch rollout is still in progress as of publication.
HYPOTHESIS:
Quoted-local-part regex in email-check-lite exhibits
catastrophic backtracking on inputs of the form
"\\\\\\\\\\\\...x , consuming event-loop time linear in attempts
but exponential in input length.
CITED PATH:
src/index.ts:14 -> validate(input)
src/local-part.ts:71 -> match against /^(?:\\\\.|[^\\\\"])*"$/
-> backtracking when terminator is absent
DISPROOF:
Attempt 1: claim that input length is bounded upstream.
REFUTED — surveyed 5 framework defaults; none enforce a
sub-catastrophic length limit before calling validate().
Attempt 2: claim Node's regex engine cuts catastrophic paths.
REFUTED — measured 47-second stall on 1.4 KB input on
Node 20 LTS in a clean process.
PROPOSED PATCH:
Deterministic rewrite of quoted-local-part regex; explicit
max-length parameter defaulting to 254 octets.
CONFIDENCE: 0.86 (input bound depends on caller context).This is a novel attack class within the agentic-coding ecosystem rather than a single bug. The MCP filesystem-bridge server exposes a read tool that returns file contents to the model in a structured envelope, but the envelope does not delimit user-controlled file contents from MCP control directives. A coding agent that reads an attacker-supplied repository file therefore parses the file's content as if it were instructions, including a synthetic tool-call request to the same server's exec handler with attacker-chosen arguments.
The end-to-end chain is: developer asks the agent to summarise an untrusted repository, the agent reads a poisoned source file (commonly hidden inside a Markdown comment or a docstring), the file's contents include a forged MCP tool-call directive that the agent's prompt template does not isolate, and the agent emits a real exec call with the attacker's payload. The payload exfiltrates the developer's local credentials — git config tokens, npm tokens, environment variables from the agent's process — to a webhook URL embedded in the same poisoned file.
Mitigation has three layers. First, the MCP bridge ships a content-isolation envelope that wraps every file-read result in a structured, JSON-only block the agent's template treats as data, never as instructions. Second, the bridge enforces a configurable command allowlist on the exec handler, defaulting to deny. Third, agent-side, we coordinated with three popular agentic-coding clients to ship default-deny on cross-tool exec when the source of the previous tool call was a content read. All three layers landed within the disclosure window.
Disclosure coordinated with the mcp-fs-bridge maintainer (D0+1), three agentic-coding client teams (D0+2), and a private notice to two enterprise platforms (D0+4) that had embedded the bridge in their default tool list. Patch on the bridge landed D0+33; client-side mitigations landed staggered between D0+41 and D0+68; this dossier published D0+92. A follow-up dossier on broader prompt-injection-driven tool misuse is in coordinated review.
HYPOTHESIS:
mcp-fs-bridge's read handler returns untrusted file content
inside an envelope the agent's prompt template parses as
instruction-eligible, enabling a forged tool-call that
reaches the exec handler with attacker-chosen arguments.
CITED PATH:
agent: user prompt -> request: read(file)
bridge: handlers/read.ts:54 -> readFile(path)
bridge: handlers/read.ts:88 -> envelope = `<file>...content...</file>`
agent: prompt template inlines envelope verbatim
agent: parses content as containing tool-call directive
agent: emits exec(cmd=attacker_string)
bridge: handlers/exec.ts:104 -> spawn(cmd)
exfil: outbound to attacker webhook
DISPROOF:
Attempt 1: claim that envelopes are unambiguous to the agent.
REFUTED — reproduced on three popular agent templates with
a 4-line poisoned README; verified outbound exec call.
Attempt 2: claim exec handler requires explicit user confirm.
REFUTED — default config has confirm=false; surveyed
deployments overwhelmingly leave the default.
PROPOSED PATCH:
Bridge: ship content-isolation envelope (JSON data block
only, never instruction-eligible); default-deny exec.
Clients: cross-tool exec deny when prior tool was a read.
CONFIDENCE: 0.92 (novel class, reproducible across clients).The same dossier you skimmed above, rendered with every field populated: metadata table, technical write-up, full reasoning trace, disproof attempt, proposed mitigation, disclosure timeline, and references.
The chain begins at the JSON transformer's type-tag fast-path and lands inside the plugin loader's deferred-require resolver, traversing nine call-graph hops across two packages. The transformer trusts a __sg_type field on incoming records, looks it up in a constructor registry initialised at module load, and applies the resulting class without verifying that the registry itself is intact. Because the registry is a plain object reachable via prototype, a prior pollution gadget anywhere in the same process — including the library's own merge helper, lib/util/merge.js:44 — silently grafts an attacker-chosen entry on top of the prototype chain.
The exploit is reachable from any service that logs an attacker-controlled request body before sanitisation, which is the common pattern across the affected dependents. Once the constructor lookup resolves to the polluted entry, the plugin loader's lazy-require path is invoked with the attacker-supplied module identifier; the loader then calls require() against the user-controlled string and immediately executes the module's top-level statements. The end-to-end primitive is unauthenticated remote code execution on the logging process, which on most affected deployments is the same process serving HTTP traffic.
The proposed mitigation is a one-line guard at lib/transformers/json.js:122 that rejects type-tag values whose prototype is not Object.prototype, plus a hard registry freeze applied at module load. We attached a patch to the disclosure that takes the worst-case decode time from microseconds to within the same order of magnitude — measurable but not detectable in practice. The maintainer accepted the patch with minor renaming and shipped it in 4.2.1 alongside an advisory.
Disclosure opened on D0+2 to logger-core's maintainer; acknowledgement landed inside 18 hours. The maintainer requested a 30-day extension to coordinate with two large downstream packagers that vendored the transformer; we granted it without objection. Patch shipped on D0+45, advisory cleared CVE assignment on D0+88, this dossier published on D0+92. No exploitation observed in our customer telemetry across the disclosure window.
A note on the call graph: the chain crosses two package boundaries. The first three hops live in the consumer service; hops four through eight are inside logger-core; hop nine resolves to whichever module the attacker named in the polluted registry entry. The platform's reachability proof was generated against the released artifact, not against a synthetic build, which means the cited line numbers correspond exactly to logger-core 4.2.0 as published on the public registry on 2026-04-01.
HYPOTHESIS:
Untrusted __sg_type field in logger-core's JSON transformer
resolves to attacker-grafted constructor via prototype pollution,
reaching the plugin loader's lazy require() call. End-to-end
primitive: unauthenticated remote code execution on any service
that logs an attacker-shaped request body before sanitisation.
CONTEXT:
Package : logger-core
Version range: 4.0.0 <= v < 4.2.1
Entry surface: HTTP server middleware logging req.body
Sink : require(idStr) via plugin loader's lazy resolver
CITED PATH (9 hops):
[1] http-server/middleware/body.js:71
function logBody(req, res, next) {
logger.info({ method: req.method, body: req.body });
next();
}
[2] app/logger.js:34
logger.info -> logger-core entry pipeline
[3] logger-core/index.js:118
pipeline.push(JSONTransformer())
[4] logger-core/lib/transformers/json.js:118
function readTypeTag(record) {
const tag = record.__sg_type;
if (typeof tag !== 'string') return null;
return registry[tag]; // <-- prototype read
}
[5] logger-core/lib/transformers/json.js:154
const ctor = readTypeTag(record);
if (ctor) return new ctor(record.value);
[6] (prototype lookup)
registry.__proto__ has been polluted prior to this call
via the merge gadget at lib/util/merge.js:44 with
options.deepMerge = true (library default).
[7] logger-core/lib/loader/plugin.js:62
function resolvePluginById(entry) {
const idStr = entry.id;
const path = require.resolve(idStr, {
paths: [process.cwd()]
});
return require(path);
}
[8] logger-core/lib/loader/plugin.js:78
require(path); // <-- arbitrary module load
[9] <attacker module top-level>
module.exports = (function () {
// attacker payload runs here
})();
DISPROOF (3 attempts, all refuted):
Attempt 1: "registry is frozen at module load."
Cited code:
lib/registry.js:41
Object.freeze(REGISTRY_WRAPPER);
Refutation:
Object.freeze is applied to the wrapper object that holds
a reference to the live registry table; the table itself
(REGISTRY_WRAPPER.entries) is mutable and writable, and the
prototype lookup at step 5 goes through the live table, not
the wrapper. Verified by inspecting Object.getOwnPropertyDescriptors
on a freshly imported module: entries.writable === true.
Attempt 2: "pollution gadget is unreachable from any input."
Cited code:
lib/util/merge.js:44
function deepMerge(target, source, opts) {
for (const key of Object.keys(source)) { ... }
}
Refutation:
Object.keys does not enumerate inherited properties, but the
assignment inside the loop uses bracket notation against
target[key] = source[key]; when source itself is a plain
object literal parsed from attacker-controlled JSON, the
parser sets the __proto__ key as an own property in some
JSON parsers including the library's default parser. The
library uses fast-json-parse@2.x which preserves __proto__
as an own property. Verified end-to-end on the library's
own config loader at lib/config/load.js:21, which calls
deepMerge with options.deepMerge = true.
Attempt 3: "require() refuses non-relative paths in this context."
Cited code:
lib/loader/plugin.js:65
require.resolve(idStr, { paths: [process.cwd()] });
Refutation:
require.resolve with paths option accepts any package name
that resolves inside the consumer's node_modules. The
attacker's polluted entry sets entry.id to a known
installed package (any transitive of the consumer), which
resolves successfully and is then loaded by line 78.
PROPOSED PATCH:
--- a/lib/transformers/json.js
+++ b/lib/transformers/json.js
@@ -118,6 +118,12 @@
function readTypeTag(record) {
const tag = record.__sg_type;
if (typeof tag !== 'string') return null;
+ if (!Object.prototype.hasOwnProperty.call(registry, tag)) {
+ return null;
+ }
+ if (typeof registry[tag] !== 'function') {
+ return null;
+ }
return registry[tag];
}
--- a/lib/registry.js
+++ b/lib/registry.js
@@ -38,7 +38,9 @@
function buildRegistry(entries) {
- const r = { entries };
+ const r = Object.create(null);
+ for (const [k, v] of Object.entries(entries)) r[k] = v;
Object.freeze(r);
return r;
}
REACHABILITY:
Reachable from any service that logs an attacker-controlled
request body before sanitisation. Cross-checked against the
10 most-depended-on dependents in our SBOM mirror: 8 of 10
invoke logger.info on a path containing untrusted body data.
CONFIDENCE: 0.94
Lower bound set by disproof pass; upper bound trimmed by
variance across consumer middleware patterns.
AUTHORED-BY: Griffin Zero
CHECKPOINT : griffin-zero-r6.2-2026-04
TRACE-ID : trace-SG-INTEL-2026-0042
A second Griffin instance was primed with a contrarian system prompt and asked to refute the chain. Three refutation attempts were generated. None held under code inspection. The full attempt log is inside the trace above; the short version is that each attempt cited a specific defensive mechanism (a freeze, an inputs-don't-reach claim, a resolver restriction), and each cited mechanism failed to stop the chain when verified against the released artifact.
Two-line defensive change in the JSON transformer: reject type tags that don't appear as own properties of the registry, and reject entries whose value isn't a function. Paired with a structural fix in registry construction: build the registry on a null prototype, so even a successful pollution gadget elsewhere in the process can't reach it.
Every dossier begins as a private notification to the upstream maintainer. Nothing on this page was published before the maintainer had a fix in hand and a chance to ship it.
We follow a 90-day window from acknowledgement. Extensions are granted when a fix is in active development; we say no when the window starts being used to defer.
The dossier ships alongside the upstream patch, not before. Readers can apply the fix the same day they read the write-up.
Every dossier names the upstream maintainer, any external researcher who reported the underlying class, and the model that authored the trace. The work that lands the patch gets the byline.
Every dossier published here lands on /feed.xml as RSS, and on /threat-feed as a machine-readable stream for automated consumers (JSON, STIX 2.1, Slack webhook). A monthly email digest is available for readers who want a single inbox-friendly recap rather than a polling feed.
Coordinated disclosures and write-ups from Safeguard's research org. Human-authored, structured the same way.
Raw machine-readable CVE/KEV stream. RSS, JSON, STIX 2.1. Drop into your SIEM or aggregator.
How Griffin and Eagle are evaluated, what their reasoning traces look like, the open benchmarks.
Bounty programme scope, safe-harbour language, intake for external researchers.
Every dossier on this page came out of the same pipeline you can run on your own repositories. Connect a project and watch a first reasoning trace land.