In April 2025, the Open Source Security Foundation, Sigstore, NVIDIA, and HiddenLayer jointly announced the v1.0 release of the OpenSSF Model Signing specification — an industry standard for cryptographically signing AI models. The release builds on the model-transparency project hosted at github.com/sigstore/model-transparency, which leverages Sigstore's keyless signing infrastructure to attach verifiable signatures to ML model artifacts. NVIDIA began signing all NVIDIA-published models in the NGC catalog with the specification in March 2025, becoming the first major model hub to offer cryptographic signing for hosted models. Google's April blog post described the rollout as "Taming the Wild West of ML." This post explains what the specification actually signs, how downstream verification works, and where the gaps remain.
What does Model Signing v1.0 actually sign?
A signed model is more than a signed weight file. Models are usually a directory of artifacts: weight files (potentially many GB, split into shards), tokenizer files, configuration, model card, and sometimes example code. The specification defines a manifest format that hashes each file in the model directory and signs the manifest as a single unit. The signature attests to the entire bundle — not just any one component — so an attacker who substitutes the tokenizer or the configuration while leaving the weights unchanged still fails verification. The manifest format is JSON, similar to in-toto attestation structures, and the signature can be either keyless (via Sigstore's Fulcio) or traditional (X.509 certificate or raw public key).
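To make "sign the manifest, not the files" concrete, here is a minimal sketch of the idea in Python. The field names are illustrative, not the exact OMS v1.0 schema; the real manifest format is defined by the specification and produced by the model-signing tooling.
# Illustrative sketch: hash every file in a model directory into a single
# manifest so that one signature covers the whole bundle. Field names are
# hypothetical, not the exact OMS v1.0 schema.
import hashlib
import json
from pathlib import Path

def file_sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream large shards
            h.update(chunk)
    return h.hexdigest()

def build_manifest(model_dir: str) -> dict:
    root = Path(model_dir)
    return {"files": [
        {"name": str(p.relative_to(root)), "digest": {"sha256": file_sha256(p)}}
        for p in sorted(root.rglob("*")) if p.is_file()
    ]}

# The signature is computed over this one serialized manifest, so swapping
# any single file (tokenizer, config, weight shard) breaks verification.
payload = json.dumps(build_manifest("./my-model"), sort_keys=True).encode()
print(hashlib.sha256(payload).hexdigest())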
How does keyless signing via Sigstore work for models?
Keyless signing is the Sigstore innovation that makes signing operationally tractable at scale. The signer authenticates with an OIDC identity provider (GitHub Actions, Google, Microsoft, etc.) and generates an ephemeral keypair; Fulcio issues a short-lived certificate binding the public key to that identity; the signer signs the manifest with the ephemeral private key; and the signature plus the certificate go to Rekor (the public transparency log). The ephemeral private key is then discarded. Verification, then, checks: does the signature validate against the certificate? Was the certificate issued by Fulcio? Is it recorded in Rekor? Does the identity in the certificate match an expected signer? For NVIDIA, the expected signer is the OIDC identity of NVIDIA's GitHub Actions publishing workflow — and that identity is what enterprise defenders pin to in their verification policy.
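In code, that whole flow collapses into a single call. A minimal sketch, assuming the v1.0 API of the model-signing Python package from the model-transparency repository (check the repository's README for the current surface):
# Keyless signing sketch with the model-signing package
# (pip install model-signing). The Config().sign() API is assumed from
# the v1.0 release of github.com/sigstore/model-transparency.
import model_signing

# Runs the OIDC flow, obtains a short-lived Fulcio certificate, signs the
# manifest with an ephemeral key, and records the signature in Rekor.
model_signing.signing.Config().sign("./my-model", "./my-model.sig")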
How do we actually verify a signed model?
The model-signing CLI (and the Python library behind it) is the reference implementation. Verification in a CI pipeline or at model-load time looks like this (a library-level equivalent follows the CLI examples):
# Verify a model directory signed via OpenSSF Model Signing
# (the identity shown is illustrative; pin to the publisher's documented
# publishing identity)
model_signing verify ./Llama-4-Scout-17B-16E \
  --signature ./Llama-4-Scout-17B-16E.sig \
  --identity "https://github.com/meta-llama/llama-models/.github/workflows/publish.yml@refs/heads/main" \
  --identity_provider "https://token.actions.githubusercontent.com"
# Verify an NVIDIA NGC model. The identity below is an illustrative
# placeholder: NVIDIA signs from a GitHub Actions publishing workflow, so
# the matching issuer is GitHub's OIDC provider, not accounts.google.com.
model_signing verify ./nvidia/Nemotron-4-340B \
  --signature ./Nemotron-4-340B.sig \
  --identity "https://github.com/NVIDIA/<publishing-repo>/.github/workflows/publish.yml@refs/heads/main" \
  --identity_provider "https://token.actions.githubusercontent.com"
The verification pins not just to a static key but to a specific publishing identity. An attacker who compromises an NVIDIA developer's laptop cannot forge a signature unless they also compromise the OIDC identity provider — a much higher bar than stealing a long-lived key. The Rekor transparency log provides a second check: any signature visible in your verification pipeline must also be visible in the public log, which means a backdated signature is detectable.
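The Rekor check can also be performed directly against the public log's search index. A sketch using Rekor's /api/v1/index/retrieve endpoint; the digest value is a hypothetical placeholder:
# Look up Rekor entries by artifact digest via the public search index.
import json
import urllib.request

digest = "sha256:<artifact-digest>"  # hypothetical placeholder
req = urllib.request.Request(
    "https://rekor.sigstore.dev/api/v1/index/retrieve",
    data=json.dumps({"hash": digest}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    entry_uuids = json.loads(resp.read())
# An empty result for a signature you were asked to trust is the red flag:
# the signature exists locally but the public log has no record of it.
print(entry_uuids)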
What did NVIDIA NGC actually adopt?
NVIDIA began signing all NVIDIA-published models in its NGC catalog in March 2025. The signing happens in the NGC publishing pipeline; the signatures are distributed via Rekor and are also embedded as metadata in the NGC catalog API. NVIDIA's own LLM serving stack (TensorRT-LLM, Triton Inference Server) integrates verification at model-load time, though as the ShadowMQ disclosure later revealed, NVIDIA's own serving framework had unrelated RCE bugs that defenders should not paper over. The lesson: model signing is necessary but not sufficient. A signed model containing a known-vulnerable architecture or trained on a poisoned dataset is still dangerous; the signature merely guarantees that the artifact you have is the artifact the signer published.
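A minimal load-time gate looks like the sketch below, assuming the same model-signing Python API as above; load_weights() is a hypothetical stand-in for your runtime's actual loader.
# Load-time verification gate: refuse to deserialize weights unless the
# signature verifies first. model-signing API assumed as above.
import model_signing

def load_weights(model_dir: str):
    # Hypothetical stand-in for your runtime's actual weight loader.
    raise NotImplementedError

def load_verified(model_dir: str, sig: str, identity: str, issuer: str):
    model_signing.verifying.Config().use_sigstore_verifier(
        identity=identity, oidc_issuer=issuer
    ).verify(model_dir, sig)  # raises if verification fails
    return load_weights(model_dir)  # only reached after verification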
Where are the gaps?
Three honest gaps remain in 2025-2026. First, Hugging Face, the largest model registry by volume, has not yet committed to OpenSSF Model Signing as a default. Hugging Face supports its own model commit signing (via GPG keys associated with user accounts), but the two schemes do not interoperate. Until Hugging Face adopts the OpenSSF spec or a compatible scheme, the largest model surface area remains unsigned. Second, fine-tunes and adapters are particularly under-signed: the fine-tuning step is usually performed by downstream consumers, not by the base model's publisher, so the signature chain breaks. Third, dataset signing is essentially unsolved; there is no equivalent specification for attesting that the dataset you trained on is the dataset the publisher distributed.
What should enterprises require in procurement?
For any AI model entering production, require: (a) a signed manifest using OpenSSF Model Signing v1.0 or equivalent, (b) the signing identity recorded in your model registry, (c) verification at model-pull time and at model-load time, (d) the public Rekor transparency log entry recorded in your AIBOM. For models that lack signing, require an internal signing step before promotion, as sketched below: pull from upstream, scan for known-bad content, compute and pin SHA-256 digests, then sign with your internal signing identity. This produces a chain of custody even when the upstream signature is missing. The Coalition for Secure AI (CoSAI) published "Signing ML Artifacts" in November 2025 documenting the recommended practice; enterprise defenders should treat it as the reference guidance.
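A sketch of that internal promotion step, assuming the model-signing Python API from earlier; the staging path and the scanning step are hypothetical placeholders for your own pipeline:
# Internal promotion sketch: pin SHA-256 digests for an unsigned upstream
# model, then sign under your own identity. Paths are hypothetical.
import hashlib
import json
from pathlib import Path

import model_signing

def pin_digests(model_dir: str) -> dict:
    pins = {}
    for p in sorted(Path(model_dir).rglob("*")):
        if p.is_file():
            h = hashlib.sha256()
            with p.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            pins[str(p.relative_to(model_dir))] = h.hexdigest()
    return pins

model_dir = "./staging/upstream-model"  # hypothetical staging path
# (run your known-bad content scanner over model_dir at this point)
Path("pins.json").write_text(json.dumps(pin_digests(model_dir), indent=2))
# Sign inside your own CI so the Sigstore identity recorded in Rekor is
# your publishing workflow's, not the upstream publisher's.
model_signing.signing.Config().sign(model_dir, "model.sig")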
How does this interact with AIBOM formats?
The signing manifest and the AIBOM are complementary artifacts that describe different properties. The AIBOM (CycloneDX 1.7 ML-BOM or SPDX 3.0 AI profile) describes what the model is — its identity, training data, model card, and dependencies. The signing manifest describes that the artifacts are authentic — that the bits on disk match the bits the publisher published. The two should be linked: every model component in your AIBOM should carry a reference to the signing manifest, the Rekor entry, and the verification policy. CycloneDX 1.7 supports this via the component-level externalReferences block; SPDX 3.0 supports it via relationship triples to a separate signing element. The combined artifact set — AIBOM plus signing manifest — is what auditors and incident responders will request. Maintain them together, version them together, and verify them together at every deployment gate.
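A sketch of that linkage for a CycloneDX component, built as a plain dict for illustration; the reference types and URLs are illustrative choices, so check the CycloneDX 1.7 schema for the exact enumeration.
# Sketch: a CycloneDX ML-BOM component linking to its signing manifest and
# Rekor entry via externalReferences. Types and URLs are illustrative.
import json

component = {
    "type": "machine-learning-model",
    "name": "Nemotron-4-340B",
    "externalReferences": [
        {"type": "digital-signature",  # assumed type name; verify in schema
         "url": "https://example.com/models/Nemotron-4-340B.sig"},
        {"type": "other",
         "comment": "Rekor transparency log entry",
         "url": "https://rekor.sigstore.dev/api/v1/log/entries/<uuid>"},
    ],
}
print(json.dumps(component, indent=2))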
What is next on the standards trajectory?
OASIS Open is hosting the CoSAI work and is the likely standards body for the next maturation. Sigstore's model-transparency repository is the reference codebase and is evolving toward better fine-tune support and toward dataset signing. NIST has signaled (in the AI RMF 1.0 and subsequent profile documents) that model integrity attestation is an expected control for high-risk AI systems; the NIST AI 600-1 generative AI profile published in 2024 references it. The trajectory is unambiguous: by 2027, unsigned models in regulated industries will be the operational equivalent of unsigned container images today, technically possible but professionally indefensible.
How Safeguard Helps
Safeguard requires OpenSSF Model Signing verification at every model pull and at every inference-runtime load by default, with signing identity pinned per upstream publisher. When a model is unsigned (Hugging Face being the most common case), the platform produces an internal signature as part of the promotion-to-production workflow, with the SHA-256 chain recorded in your AIBOM. Griffin AI ingests Rekor transparency log entries and matches them against your inventory, flagging any model whose Rekor entry does not match the expected publisher identity. Policy gates block deployments of unsigned models in production tiers, and the AIBOM records the full signing chain — publisher identity, certificate fingerprint, Rekor entry — so audit and incident response have a verifiable chain of custody. The result: the OpenSSF specification becomes operational policy in your environment, not just aspirational architecture.