Pydantic v2 Security Implications

Pydantic v2 rewrote the core in Rust and changed validation semantics. Here is what that means for security-sensitive code, from input coercion to ReDoS exposure.

Shadab Khan
Security Engineer
6 min read

Pydantic v2 arrived in mid-2023 with a Rust-backed core (pydantic-core), a rewritten validation pipeline, and a long list of semantic changes from v1. It is faster, cleaner, and better designed. It is also, if you migrated carelessly, a potential source of validation surprises that matter for anyone using Pydantic to enforce security boundaries.

This is a review, one year on, of what v2 changed that matters for security-sensitive code. I have migrated three services and audited a dozen more; the patterns are consistent.

Coercion Behavior Changed, and That Matters

Pydantic v1 was permissive about type coercion. MyModel(age=42) and MyModel(age="42") both worked, with v1 silently converting the string to an integer. v2 tightened this in multiple subtle ways — integers still accept string inputs in "lax" mode but with more precise rules, booleans accept a specific set of strings ("true", "false", "yes", "no", "on", "off", case-insensitive) rather than any truthy value, and JSON vs. Python inputs follow different coercion paths.
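A minimal sketch of the v2 lax-mode behavior described above (the `Signup` model and its fields are illustrative, not from any real codebase):

```python
from pydantic import BaseModel, ValidationError


class Signup(BaseModel):
    age: int
    active: bool


# Lax (default) mode still coerces well-formed strings.
ok = Signup(age="42", active="yes")
assert ok.age == 42
assert ok.active is True

# But arbitrary truthy strings are no longer accepted for bools.
rejected = False
try:
    Signup(age=1, active="definitely")
except ValidationError:
    rejected = True
assert rejected
```

Note that `"yes"` coerces because it is in v2's fixed set of recognized boolean strings; in v1-era code paths that treated any non-empty string as truthy, the same input could mean something quite different.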

For security, the important thing is that v2 has a first-class "strict" mode. Field(strict=True) or model_config = ConfigDict(strict=True) rejects coercion entirely. If you are using Pydantic to validate untrusted input at an API boundary, strict mode is often what you want. It means a client sending "42" for a field declared as int gets a 422 error rather than silent acceptance.
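Here is what strict mode looks like at a boundary model (again a hypothetical model, sketched to show the failure mode):

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class StrictSignup(BaseModel):
    model_config = ConfigDict(strict=True)
    age: int
    is_admin: bool


# In strict mode, "42" for an int field is rejected outright.
errors = []
try:
    StrictSignup(age="42", is_admin=True)
except ValidationError as e:
    errors = e.errors()
assert errors[0]["loc"] == ("age",)

# Likewise, no string of any kind coerces to a bool.
bool_rejected = False
try:
    StrictSignup(age=1, is_admin="false")
except ValidationError:
    bool_rejected = True
assert bool_rejected
```

In a FastAPI-style API, that `ValidationError` surfaces to the client as a 422 rather than a silently coerced value.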

The v1 permissive behavior had real security consequences. I have seen cases where is_admin=True was checked via a boolean field that accepted the string "false" as True (because any non-empty string was truthy in some code paths). v2 with strict mode makes that class of bug harder.

Audit your Pydantic models during migration. Every field that validates security-relevant input should be reviewed for whether strict mode is appropriate.

The Rust Core Changes the Performance Envelope

pydantic-core is written in Rust and compiled per platform. Benchmarks show 5-50x speedups for validation. For an API that validates a million requests per day, that is a meaningful resource saving.

Security implications of the Rust core:

Memory safety is better. Rust's ownership model eliminates entire classes of memory corruption bugs that a C-based core could have had. Pydantic-core has had security issues but not memory-corruption issues.

Build reproducibility requires attention. Pydantic ships platform-specific wheels (musllinux, manylinux, macOS, Windows). Your Docker base image and your pinned wheel need to match. A lock file generated on macOS and used to build a Linux image with --require-hashes will fail unless your lock file includes the Linux wheel hash.

ReDoS exposure is different. The validation of string patterns (via constr(regex=...) in v1 or Field(pattern=...) in v2) uses Rust's regex engine in many cases, which is linear-time by design and does not have the catastrophic backtracking behavior of Python's re module. This actually reduces ReDoS exposure for pattern-validated fields — a genuine security win.

But: the Rust engine does not support every pattern feature (backreferences and lookarounds, in particular). Patterns that need those features require opting into the Python engine via ConfigDict(regex_engine='python-re'), and once you are back on Python's re, ReDoS is back on the table. Audit the patterns your models declare.
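A pattern-validated field that stays on the linear-time engine looks like this (the `Handle` model is a made-up example; note that v2 pattern matching is search-based, so the anchors are doing real work):

```python
from pydantic import BaseModel, Field, ValidationError


class Handle(BaseModel):
    # Simple character-class pattern: supported by the linear-time
    # Rust engine, so no catastrophic backtracking is possible.
    name: str = Field(pattern=r"^[a-z0-9_]{3,16}$")


assert Handle(name="alice_01").name == "alice_01"

pattern_rejected = False
try:
    Handle(name="Not A Handle!")
except ValidationError:
    pattern_rejected = True
assert pattern_rejected
```

If this pattern contained, say, a backreference, it could not run on the default engine; keeping boundary patterns within the Rust engine's feature set is itself a hardening measure.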

Did Your Custom Validators Survive?

v1's @validator and @root_validator decorators are deprecated. v2 uses @field_validator and @model_validator, with different signatures and mode arguments (mode='before', mode='after', mode='wrap').

During migration, custom validators are where bugs tend to slip in. The v1-to-v2 migration tool (bump-pydantic) handles the common cases, but corner cases — a validator that raised ValueError in v1 and is expected to raise PydanticCustomError in v2, or a validator that depended on pre=True behavior, now renamed to mode='before' — can leave validators running in the wrong order, or not running at all.

A validator that is supposed to reject invalid input but silently no-ops because it is not being called is a security bug. After migration, every custom validator needs a negative test (input that should fail) in addition to a positive test. Do not assume the migration tool's output is correct without running those tests.
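A sketch of what that discipline looks like in practice — a v2-style validator (the `PasswordReset` model is illustrative) paired with both a positive and a negative test:

```python
from pydantic import BaseModel, ValidationError, field_validator


class PasswordReset(BaseModel):
    token: str

    @field_validator("token", mode="before")
    @classmethod
    def reject_blank(cls, v):
        # mode='before' runs prior to type coercion, mirroring v1's pre=True.
        if not isinstance(v, str) or not v.strip():
            raise ValueError("token must be a non-empty string")
        return v.strip()


# Positive test: the validator normalizes valid input.
assert PasswordReset(token="  abc123  ").token == "abc123"

# Negative test: the validator must actually fire and reject.
try:
    PasswordReset(token="   ")
    raise AssertionError("validator did not reject a blank token")
except ValidationError:
    pass
```

If the decorator were silently dropped or the mode mistranslated during migration, the negative test is the one that catches it; the positive test alone would still pass for an unstripped but non-empty token.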

Does Strict Mode Work Everywhere?

Strict mode has edge cases. The one that surprised me most: Literal types are always strict in v2 for JSON inputs, but slightly more lenient for Python inputs. This matters for APIs that accept both JSON and dict inputs, because the same model can behave differently depending on the entry point.

Related: UUID, datetime, and Decimal fields have configurable strictness. Field(strict=True) on a datetime field rejects any string that is not strictly ISO 8601, which is usually what you want for a security-sensitive "expires at" field. Without strict, v2 accepts a range of datetime-ish strings that might not match your assumptions.
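A sketch of the strict-datetime behavior, including the JSON-versus-Python asymmetry (the `ApiToken` model is hypothetical):

```python
from datetime import datetime

from pydantic import BaseModel, Field, ValidationError


class ApiToken(BaseModel):
    expires_at: datetime = Field(strict=True)


# Strict JSON input: a well-formed RFC 3339 / ISO 8601 string passes.
tok = ApiToken.model_validate_json('{"expires_at": "2025-01-01T00:00:00Z"}')
assert tok.expires_at.year == 2025

# A datetime-ish string that is not ISO 8601 is rejected.
json_rejected = False
try:
    ApiToken.model_validate_json('{"expires_at": "next tuesday"}')
except ValidationError:
    json_rejected = True
assert json_rejected

# Strict Python input is stricter still: strings are rejected entirely,
# only an actual datetime instance is accepted.
python_rejected = False
try:
    ApiToken(expires_at="2025-01-01T00:00:00Z")
except ValidationError:
    python_rejected = True
assert python_rejected
assert ApiToken(expires_at=datetime(2025, 1, 1)).expires_at.year == 2025
```

That last pair is the same-model-different-entry-point behavior the Literal example above warns about, in miniature.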

Run a property-based test (Hypothesis) against your validation models. Generate malformed inputs and check that validation rejects them. I have found bugs in every migration I have reviewed this way.
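Hypothesis is the right tool for this, but the core idea fits in a stdlib-only sketch — generate garbage, feed it to the model, and assert that nothing slips through that violates the contract (the `SignupInput` model and its pattern are assumptions for illustration):

```python
import random
import re
import string

from pydantic import BaseModel, Field, ValidationError


class SignupInput(BaseModel):
    username: str = Field(pattern=r"^[a-z0-9_]{3,16}$")


# The contract the field is supposed to enforce, restated independently.
CONTRACT = re.compile(r"[a-z0-9_]{3,16}")
ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

random.seed(0)
violations = []
for _ in range(500):
    garbage = "".join(
        random.choice(ALPHABET) for _ in range(random.randint(0, 24))
    )
    try:
        SignupInput(username=garbage)
        # Anything validation accepted must genuinely satisfy the contract.
        if not CONTRACT.fullmatch(garbage):
            violations.append(garbage)
    except ValidationError:
        pass

assert violations == []
```

Hypothesis adds shrinking, a smarter generator, and a persistent failure database on top of this loop; the assertion — accepted input always satisfies the stated contract — stays the same.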

Serialization: json_schema_extra and Sensitive Fields

Pydantic v2's model_dump_json() replaces v1's json(). Default behavior is to include all fields. If you have sensitive fields — tokens, internal IDs, user email addresses — that should not be serialized in API responses, the exclusion mechanics changed from v1.

Field(exclude=True) excludes from serialization. SecretStr and SecretBytes redact their values in str(), repr(), and serialization. For JWT tokens, internal auth headers, and similar, use SecretStr.

The subtle bug: model_dump() (without _json) returns a dict with SecretStr values as SecretStr objects, not redacted strings. If you then json.dumps that dict with a custom encoder, you might accidentally serialize the secret value. Use model_dump_json() for the safe path, and if you need the dict, remember to call .get_secret_value() consciously.
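The difference is easy to demonstrate (the `Session` model is a made-up example):

```python
from pydantic import BaseModel, SecretStr


class Session(BaseModel):
    user: str
    token: SecretStr


s = Session(user="alice", token=SecretStr("hunter2"))

# repr() and JSON serialization redact the secret.
assert "hunter2" not in repr(s)
assert "hunter2" not in s.model_dump_json()

# model_dump() keeps the live SecretStr object; unwrapping it is an
# explicit, auditable act rather than an accident of serialization.
raw = s.model_dump()
assert raw["token"].get_secret_value() == "hunter2"
```

The failure mode to watch for is a custom JSON encoder (e.g. `json.dumps(raw, default=...)`) applied to the `model_dump()` output: whether the secret leaks then depends entirely on how that encoder handles a SecretStr object.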

Pydantic CVEs Worth Tracking

Pydantic itself has been remarkably CVE-light, which is a testament to how seriously the Pydantic team takes security. CVE-2024-3772 was a ReDoS in email validation triggered by a crafted email string, fixed in 1.10.13 for v1 and 2.4.0 for v2.

Pydantic v1 is now in maintenance mode. v1.10 receives security fixes but no new features. If you are still on v1, plan your migration. Running v1 indefinitely means your app is increasingly divergent from the active v2 codebase, and newer Pydantic-aware libraries (newer FastAPI, SQLModel, beanie, etc.) target v2.

Settings and Environment Variables

BaseSettings moved to a separate package in v2 (pydantic-settings). The behavior is mostly compatible, but environment variable handling changed. env_prefix handling, nested model env parsing, and .env file parsing have subtle differences.

If your production secrets are loaded via BaseSettings, re-test thoroughly. A misconfigured env_prefix that causes a secret to not be loaded is a denial-of-service (the app crashes on missing config). A misconfigured case-sensitivity setting that causes a secret to be loaded from the wrong env var is worse.

How Safeguard Helps

Safeguard's reachability analysis maps Pydantic CVEs against the specific validator types and fields your models use, so a v1 EmailStr ReDoS does not produce noise if you do not use EmailStr anywhere in reachable code paths. Griffin AI drafts migration PRs from Pydantic v1 to v2, including validator decorator updates, serializer signature changes, and BaseSettings relocation. SBOM generation captures both pydantic and pydantic-core as distinct components with their own advisories and attestations. Policy gates can enforce that security-sensitive APIs declare strict=True on their input models and that fields containing secrets use SecretStr rather than plain str.
