Best Practices

Flask Application Security: A Deep Dive

Flask gives you room to make mistakes. This is a long look at the patterns that keep Flask apps safe in 2023, covering sessions, extensions, Werkzeug, and Jinja.

Shadab Khan
Security Engineer
7 min read

Flask is the Python web framework that trusts you. It hands you a request object, a routing decorator, and a template engine, and then it gets out of your way. That minimalism is why I love it and why I have audited so many insecure Flask apps. Django makes it hard to turn off CSRF protection by accident. Flask makes it hard to remember you ever needed it.

This is the guide I give to teams inheriting a Flask codebase from someone who has left the company.

Session Cookies Are Signed, Not Encrypted

The single most common misconception I encounter: Flask sessions are secret. They are not. Flask's default session uses itsdangerous to sign the session cookie, but the payload is base64-encoded JSON that anyone with a browser can read.

If you store a user_id in the session, that is fine. If you store an OAuth access token, a credit card number, or a reset password hash — as I have seen in three separate codebases — you have a data exposure bug. The client cannot modify the cookie without breaking the signature, but they can read everything.
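Anyone can verify this for themselves: the payload half of a Flask session cookie decodes with nothing but the standard library. A minimal sketch, assuming the itsdangerous cookie layout (payload.timestamp.signature, with a leading "." marking a zlib-compressed payload), which only needs SECRET_KEY for forging, never for reading:

```python
import base64
import json
import zlib

def decode_flask_session(cookie: str) -> dict:
    """Read the payload of a Flask session cookie without the SECRET_KEY.

    itsdangerous serializes sessions as payload.timestamp.signature;
    a leading "." marks a zlib-compressed payload. Only *forging* the
    cookie requires the key. Reading it requires nothing.
    """
    compressed = cookie.startswith(".")
    if compressed:
        cookie = cookie[1:]
    payload = cookie.split(".")[0]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    data = base64.urlsafe_b64decode(payload)
    if compressed:
        data = zlib.decompress(data)
    return json.loads(data)
```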

For anything sensitive, use server-side sessions. Flask-Session is the standard extension. Redis or Memcached backs the store, the client only receives an opaque session ID, and the sensitive data never leaves the server. Alternatively, use SESSION_COOKIE_SAMESITE='Lax' with short-lived access tokens passed in headers rather than cookies.
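A minimal server-side session setup, sketched with Flask-Session and a local Redis (the config keys follow the extension's documentation; the Redis host is an assumption):

```python
import redis
from flask import Flask
from flask_session import Session  # pip install Flask-Session

app = Flask(__name__)
app.config.update(
    SESSION_TYPE="redis",                    # store session data server-side
    SESSION_REDIS=redis.Redis("localhost"),  # assumes a local Redis instance
    SESSION_COOKIE_SECURE=True,              # HTTPS only
    SESSION_COOKIE_HTTPONLY=True,            # no document.cookie access
    SESSION_COOKIE_SAMESITE="Lax",
)
Session(app)  # the client now receives only an opaque session ID
```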

The SECRET_KEY for session signing should be long (32+ bytes from os.urandom or secrets.token_hex), loaded from the environment, and rotated on a schedule. CVE-2023-30861, the Flask bug where a caching proxy could cache a response carrying a refreshed Set-Cookie header and replay one user's session cookie to other clients, reminds us that even the session mechanism itself is not immune to subtle issues.
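A loader that enforces this at startup. The 64-character check assumes a hex-encoded 32-byte key, and the SECRET_KEY environment variable name is a convention, not a Flask requirement:

```python
import os
import secrets

def load_secret_key() -> str:
    """Fetch the session signing key from the environment; refuse weak or missing keys."""
    key = os.environ.get("SECRET_KEY", "")
    if len(key) < 64:  # 32 bytes, hex-encoded
        raise RuntimeError(
            "SECRET_KEY missing or too short; generate one with: "
            "python -c 'import secrets; print(secrets.token_hex(32))'"
        )
    return key
```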

Werkzeug Is Where the CVEs Live

Flask is a thin layer over Werkzeug. Most of the CVEs that matter to Flask users are in Werkzeug. 2023 was busy.

CVE-2023-23934 was a cookie parsing issue: Werkzeug mishandled "nameless" cookies such as =__Host-test=bad, which let a vulnerable application on a subdomain set __Host- prefixed cookies for its siblings, enabling some session fixation scenarios. CVE-2023-25577 was resource exhaustion in multipart form parsing: an attacker could send a form with an unbounded number of tiny parts and eat CPU and memory. CVE-2023-46136 was another multipart DoS, this time via crafted file parts that force the parser to buffer disproportionately large amounts of data. All three are fixed in current Werkzeug releases.

The lesson is that pinning Flask without pinning Werkzeug is pointless. Your lock file should pin both exactly, and your dependency update cadence needs to treat a Werkzeug advisory as "patch within the week," not "maybe next quarter."

If you are still running Flask 1.x with Werkzeug 1.x, you are on unsupported software. Flask 2.3 dropped Python 3.7, Flask 3.0 requires Python 3.8 or newer, and each major release has tightened defaults.

How Safe Is Your Jinja Usage?

Jinja2 auto-escapes HTML by default in Flask. That is why Flask XSS usually hides in templates the developer wrote with {% autoescape false %}, in fields rendered with |safe, or, most commonly, in JavaScript contexts where Jinja drops untrusted strings into a <script> block.

Auto-escape only escapes for HTML. Inside <script>, it does not escape the characters that matter for JS string injection. The fix is {{ value|tojson }}, which produces a valid JSON literal that is safe in JS context. I have audited codebases where the window.config = {{ user_data|safe }} pattern was used, and user_data included strings from the database that had never been sanitized for JS contexts.
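A side-by-side sketch using Jinja directly (the tojson filter ships with Jinja itself; Flask exposes the same filter in templates):

```python
from jinja2 import Environment

env = Environment(autoescape=True)

# Vulnerable: |safe disables escaping, so </script> in the data breaks out of the block.
unsafe = env.from_string("<script>window.config = {{ data|safe }};</script>")

# Safe: |tojson emits a JSON literal with <, >, and & escaped as \u003c and friends.
safe = env.from_string("<script>window.config = {{ data|tojson }};</script>")

payload = '</script><script>alert(1)</script>'
```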

SSTI (server-side template injection) is the nightmare scenario. If user input ever reaches render_template_string as the template argument, an attacker can read your environment, execute Python, and pivot to RCE. The classic payload is {{ config.items() }} to dump the Flask config including SECRET_KEY. Audit every render_template_string call. If the template string has any user input anywhere in it, you have an SSTI and you must refactor.
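The difference is whether user input becomes part of the template source or stays data. A contrived sketch (route names and the greeting are illustrative):

```python
from flask import Flask, render_template_string, request

app = Flask(__name__)

@app.route("/greet-bad")
def greet_bad():
    name = request.args.get("name", "")
    # VULNERABLE: user input is concatenated into the template source,
    # so ?name={{config}} executes as Jinja and dumps SECRET_KEY.
    return render_template_string("<h1>Hello " + name + "</h1>")

@app.route("/greet-good")
def greet_good():
    name = request.args.get("name", "")
    # SAFE: the template is a constant; user input arrives only as data.
    return render_template_string("<h1>Hello {{ name }}</h1>", name=name)
```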

Extensions: Who Is Reviewing Them?

Flask extensions are small Python packages, often maintained by one person, often unreviewed. Flask-Login is solid. Flask-SQLAlchemy is solid. Past that, the quality drops fast.

Flask-RESTful is in maintenance-only mode. Flask-RESTPlus was forked because its maintainer disappeared, and the fork (flask-restx) has its own pace. Flask-Security has a reboot (Flask-Security-Too) because the original stagnated. If your auth layer depends on an extension that has not seen a commit in two years, you are one CVE away from an emergency migration.

When I review a Flask codebase, the first thing I do is look at every imported extension and check PyPI for last release date and GitHub for open security issues. This is a ten-minute exercise that has surfaced real problems every time I have done it.

CSRF, and Why Flask-WTF Is Not Optional

Flask itself does not ship CSRF protection. You have to add it. Flask-WTF is the standard, and it hooks CSRF tokens into form rendering and validation. If you are building a JSON API consumed by a SPA, you still need CSRF protection on state-changing endpoints — or you need to be using SameSite=Strict cookies, or you need to be using a header-based auth scheme like Authorization Bearer tokens rather than cookies.
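Wiring it up is two lines. The sketch below assumes Flask-WTF's CSRFProtect; SPAs can send the token in the X-CSRFToken header, which Flask-WTF accepts by default:

```python
import os

from flask import Flask
from flask_wtf import CSRFProtect  # pip install Flask-WTF

app = Flask(__name__)
app.config["SECRET_KEY"] = os.environ["SECRET_KEY"]  # CSRF tokens are signed with this
CSRFProtect(app)  # every POST/PUT/PATCH/DELETE now requires a valid token
```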

The failure mode I see: a team builds a Flask API, uses Flask sessions for auth, does not add CSRF protection because "it is an API," and now their production endpoint is vulnerable to CSRF from any origin the user visits. The API/form distinction does not exempt you from CSRF if you use cookies for auth.

Debug Mode in Production Is a Backdoor

Flask's debug mode runs the Werkzeug debugger, which includes an interactive Python console on every traceback page. The console is gated by a PIN, but the PIN is derived from predictable machine attributes and is a speed bump, not a security boundary. If debug mode reaches production, anyone who can trigger an exception is one step away from a Python REPL on your server.

There is no single CVE for this because it is a configuration bug, and the Flask docs have warned against it for years. The new twist is that FLASK_DEBUG=1 and FLASK_ENV=development are easy to leak via environment misconfiguration in Kubernetes, Docker Compose, or .env files. I have seen DEBUG=True in production because the .env.production file was never created and the app silently fell back to .env.development.

Add a startup assertion. If app.debug is True and the environment is anything other than local, crash loudly.
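A sketch of that assertion. APP_ENV is an assumed convention here; substitute whatever your deploy pipeline actually sets. Call it right after creating the app, before the first request can arrive:

```python
import os

def assert_not_debug_in_prod(app) -> None:
    """Crash at startup if debug mode has leaked outside local development."""
    env = os.environ.get("APP_ENV", "production")  # unset defaults to the strict case
    if app.debug and env != "local":
        raise RuntimeError(f"Refusing to start: debug mode is on and APP_ENV={env!r}")
```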

Input Validation Is Still Your Job

Flask does very little input validation. Query strings, JSON bodies, form fields — these arrive as strings and Flask hands them to you. Use marshmallow, pydantic, or Flask-Pydantic to validate, coerce, and reject. The most common injection patterns I see in Flask are SQL injection via string interpolation into raw queries, and path traversal via send_file(user_input). Both are trivial to avoid with schema validation at the route boundary.
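Both fixes fit in a few lines. The sketch below uses sqlite3 and pathlib for illustration; UPLOAD_ROOT and the users table are assumptions:

```python
import sqlite3
from pathlib import Path

UPLOAD_ROOT = Path("/srv/app/uploads").resolve()  # assumed upload directory

def get_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles quoting; never f-string user input into SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchone()

def safe_upload_path(user_filename: str) -> Path:
    # Resolve under UPLOAD_ROOT and reject anything that escapes it (Python 3.9+).
    candidate = (UPLOAD_ROOT / user_filename).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT):
        raise ValueError(f"path escapes upload root: {user_filename!r}")
    return candidate
```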

How Safeguard Helps

Safeguard's reachability analysis traces which Werkzeug or Jinja CVEs actually touch your Flask routes, so a multipart DoS in Werkzeug is not a paging incident if all your endpoints are JSON-only. Griffin AI reads your Flask blueprints and extensions to recommend ordered upgrades — Werkzeug first, then Flask, then the extensions that depend on both. SBOM coverage captures the extension soup that every real Flask app accumulates, including the unmaintained ones you inherited. Policy gates can block a production deploy if debug mode is enabled, if an extension crosses an age threshold without releases, or if a Werkzeug CVE above a CVSS threshold remains unpatched.
