The Model Context Protocol is transport-agnostic by design. The JSON-RPC 2.0 message envelope travels over whatever pipe you configure, and the specification carefully avoids baking assumptions about network topology into the wire format. That neutrality is one of the protocol's strengths, but it means transport decisions are not made for you. Picking stdio versus streamable HTTP versus a custom WebSocket shim is not a throwaway configuration choice -- it determines your authentication model, your network boundary, your observability story, and most of your attack surface.
I have watched teams deploy MCP servers with production workloads riding on whichever transport the first example they found happened to use. Stdio servers exposed through remote-execution shims. Streamable HTTP endpoints deployed without TLS because the first Getting Started guide omitted it. WebSocket variants built from half-remembered SSE tutorials. Every one of these is recoverable, but only if you understand what each transport actually guarantees.
Stdio: The Default, the Quietly Dangerous One
The stdio transport is the default in most MCP SDKs because it is the simplest to reason about. The client spawns the server as a child process, writes JSON-RPC messages to the server's stdin, and reads responses from the server's stdout. The 2025-06-18 specification defines stdio messages as newline-delimited JSON, with no additional framing.
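The framing is simple enough to sketch end to end. The following is a minimal illustration of newline-delimited JSON-RPC over a child process's pipes, not the SDK API -- the echo "server" here is a stand-in for a real MCP server:

```python
import json
import subprocess
import sys

# Stand-in "server": reads one newline-delimited JSON-RPC message from
# stdin and writes a single-line response to stdout, then exits.
SERVER = r"""
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"echoed": req["method"]}}
sys.stdout.write(json.dumps(resp) + "\n")
sys.stdout.flush()
"""

def stdio_roundtrip(method):
    # The client spawns the server as a child process and frames each
    # message as one line of JSON -- no Content-Length headers, no envelope
    # beyond the JSON-RPC object itself.
    proc = subprocess.Popen(
        [sys.executable, "-c", SERVER],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": {}}
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    response = json.loads(proc.stdout.readline())
    proc.stdin.close()
    proc.wait()
    return response
```

Note that the transport itself is just pipes plus a newline convention; everything else about stdio's behavior comes from process semantics.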
Stdio's security model is "whatever the operating system gives you between parent and child processes." That model is strong in one specific case -- when the client and server run on the same machine, under the same user, with no network hop -- and essentially meaningless elsewhere. Teams get in trouble when they wrap a stdio server in a shim that pipes it over SSH, a TCP socket, or a container exec call. At that point the stdio transport is carrying messages across a boundary it was never designed to defend.
The real stdio hazards are subtler than network exposure. A stdio server inherits its parent's environment variables, meaning secrets set in the client's process are readable by the server unless explicitly scrubbed. Stdio servers typically run with the client's file system permissions, which is convenient until a compromised server starts reading ~/.ssh. And stdio provides no native mechanism for the client to prove its identity to the server or vice versa -- any process that can invoke the server binary is "the client." If you run stdio, constrain it: dropped privileges via seccomp or AppArmor, environment scrubbing before spawn, and a filesystem sandbox scoped to the server's actual needs.
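Environment scrubbing is the cheapest of those mitigations to implement. A sketch, assuming an allowlist approach -- the variable names in `ENV_ALLOWLIST` are illustrative, and in practice the list should come from what the server actually declares it needs:

```python
import os
import subprocess

# Hypothetical allowlist; derive the real one from the server's declared needs.
ENV_ALLOWLIST = {"PATH", "HOME", "LANG"}

def scrubbed_env():
    # Keep only allowlisted variables, so secrets set in the client's
    # process (API tokens, cloud credentials) never reach the child.
    return {k: v for k, v in os.environ.items() if k in ENV_ALLOWLIST}

def spawn_server(argv):
    # Passing env= explicitly replaces inheritance entirely -- nothing
    # leaks into the child that was not deliberately forwarded.
    return subprocess.Popen(
        argv,
        env=scrubbed_env(),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
```

Filesystem sandboxing and privilege dropping require OS-specific machinery (seccomp, AppArmor, or container runtimes), but the scrub-before-spawn pattern works anywhere `subprocess` does.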
Streamable HTTP: The Modern Default for Network Deployments
The streamable HTTP transport replaced the earlier HTTP+SSE design in the 2025-03-26 revision and has stabilized through 2026-03-05. It uses a single HTTP endpoint that accepts POST requests for client-to-server messages and returns server-to-client messages either as a plain JSON response or as a Server-Sent Events stream, depending on whether the server has async notifications to deliver; the same endpoint can accept GET to open a server-initiated SSE stream. The Mcp-Session-Id header carries session identity across requests, and the MCP-Protocol-Version header negotiates the protocol revision.
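The per-request shape is easy to see in code. A sketch of the headers and body a client sends on each POST -- the values are illustrative, and this builds the request rather than wiring up a real HTTP client:

```python
import json

def build_request(method, params, session_id=None,
                  protocol_version="2025-06-18"):
    headers = {
        "Content-Type": "application/json",
        # The client must be prepared for either reply shape: a plain JSON
        # response, or an SSE stream if the server interleaves notifications.
        "Accept": "application/json, text/event-stream",
        "MCP-Protocol-Version": protocol_version,
    }
    if session_id is not None:
        # Issued by the server during initialization, then echoed by the
        # client on every subsequent request in the session.
        headers["Mcp-Session-Id"] = session_id
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()
    return headers, body
```

Everything session-related rides in those two headers, which is exactly why the next point -- treating the session ID as a credential -- matters.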
Streamable HTTP is where TLS becomes a first-class concern. The MCP specification does not mandate TLS, but in practice any streamable HTTP deployment that carries production traffic must use HTTPS with modern ciphers -- TLS 1.3 where possible, TLS 1.2 with AEAD cipher suites as a fallback, and certificate pinning for high-sensitivity clients. The Mcp-Session-Id must be treated as a bearer credential: random, opaque, at least 128 bits of entropy, and never logged in plaintext.
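Meeting the entropy and never-log-in-plaintext requirements takes only a few lines. A sketch using the standard library -- the 16-character hash prefix for log correlation is a design choice, not a spec requirement:

```python
import hashlib
import secrets

def new_session_id():
    # 32 random bytes = 256 bits of entropy, comfortably past the
    # 128-bit floor, encoded as an opaque URL-safe token.
    return secrets.token_urlsafe(32)

def loggable(session_id):
    # Never write the raw ID to logs; a truncated hash is enough to
    # correlate requests from the same session without exposing the bearer.
    return hashlib.sha256(session_id.encode()).hexdigest()[:16]
```

`secrets` (rather than `random`) matters here: session IDs generated from a non-cryptographic PRNG are guessable, which turns every other control into decoration.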
The transport supports both stateful and stateless server modes. Stateful mode maintains session state across requests on the server; stateless mode requires the client to re-initialize for each request. Stateless mode is cheaper to horizontally scale but shifts state responsibility to the client, which complicates replay protection. Stateful mode requires sticky sessions or a shared session store, both of which have their own failure modes. Most enterprise deployments I have reviewed land on stateful with a Redis-backed session store and a short TTL.
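The TTL-expiry logic at the heart of that pattern is straightforward. An in-memory sketch standing in for the Redis-backed store (Redis would handle expiry natively via key TTLs; the `now` parameter here exists so the behavior is testable without waiting):

```python
import time

class SessionStore:
    """In-memory stand-in for a Redis-backed session store with a short TTL."""

    def __init__(self, ttl_seconds=900.0):   # e.g. a 15-minute TTL
        self.ttl = ttl_seconds
        self._store = {}

    def put(self, session_id, state, now=None):
        now = time.monotonic() if now is None else now
        self._store[session_id] = (now + self.ttl, state)

    def get(self, session_id, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(session_id)
        if entry is None:
            return None
        expires_at, state = entry
        if now >= expires_at:
            # Expired: evict and force the client to re-initialize.
            del self._store[session_id]
            return None
        return state
```

The short TTL is what bounds the damage window if a session ID leaks -- a stolen credential that expires in minutes is a very different incident from one that lives for days.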
Origin header validation matters. The spec recommends that streamable HTTP servers validate the Origin header to prevent DNS rebinding attacks from browser-hosted clients. Servers that skip this check and accept requests from any origin are trivially exploitable by malicious JavaScript running in a user's browser.
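The check itself is a few lines of allowlisting. A sketch -- the origins listed are hypothetical placeholders for whatever browser-hosted clients you actually serve:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of browser origins permitted to reach this server.
ALLOWED_ORIGINS = {"https://app.example.com", "http://localhost:3000"}

def origin_allowed(origin_header):
    # A DNS-rebinding page cannot forge the browser-set Origin header,
    # so an exact-match allowlist defeats the attack. Missing or
    # malformed Origin values are rejected outright.
    if not origin_header:
        return False
    parts = urlsplit(origin_header)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        return False
    return origin_header in ALLOWED_ORIGINS
```

Whether to reject requests with no Origin header at all depends on your client mix; non-browser clients often omit it, so some deployments gate the check on other request properties. The sketch above takes the strict path.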
WebSocket and Experimental Transports
A WebSocket transport appears in several SDK forks and community implementations, though it is not part of the core specification as of 2026-03-05. WebSocket carries JSON-RPC messages over a single long-lived connection, which eliminates the request-per-message overhead of HTTP but also loses the per-message authorization checkpoint that HTTP middleware provides. If you deploy a WebSocket MCP transport, the authentication challenge moves entirely to the connection handshake, which means compromised credentials give the attacker the full lifetime of the connection with no mid-flight recheck.
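One mitigation is to restore a periodic checkpoint yourself: re-validate the credential on a timer during the connection's lifetime. A sketch of that idea -- `validate` and `recheck_interval` are assumptions of this sketch, not any SDK's API:

```python
class WebSocketAuthGuard:
    """Mid-flight credential recheck for a long-lived connection.

    HTTP middleware gives you an authorization checkpoint per request;
    WebSocket does not. This guard re-runs validation every
    `recheck_interval` seconds and kills the session if it fails.
    """

    def __init__(self, validate, recheck_interval=300.0):
        self.validate = validate            # callable: token -> bool
        self.recheck_interval = recheck_interval
        self.last_check = None
        self.revoked = False

    def allow_frame(self, token, now):
        if self.revoked:
            return False                    # session already terminated
        if self.last_check is None or now - self.last_check >= self.recheck_interval:
            self.last_check = now
            if not self.validate(token):
                self.revoked = True         # revocation takes effect here
                return False
        return True                         # inside the window: no recheck
```

Note the trade-off this makes explicit: a revoked credential still works until the next recheck tick, so the interval is effectively your maximum exposure window after revocation.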
Experimental transports include Unix domain sockets (essentially stdio with file-system-scoped access control), named pipes on Windows, and gRPC bridges that translate JSON-RPC to protobuf. Each of these inherits the security model of its underlying protocol. Unix domain sockets with 0600 permissions are roughly equivalent to stdio security-wise but support multiple simultaneous clients. Named pipes have the same property on Windows but require careful DACL configuration to avoid accidentally granting access to the Everyone SID.
Transport-Level Observability
Every transport must produce enough signal for security monitoring. For stdio, that means process-level auditing -- execve records, file access traces, network syscalls. For streamable HTTP, it means standard HTTP access logs plus the MCP session ID, request size, response size, and request latency per JSON-RPC method. For WebSocket, it means per-frame logging, which is more expensive but necessary because there is no per-request signal otherwise.
Capturing the JSON-RPC method name at the transport layer is cheap and high-value. It gives your security team a baseline of "which MCP methods are being invoked against this server at what rate" without the expense of full payload inspection. Payload inspection, when you can afford it, catches the more interesting events -- parameter patterns, capability changes, tool-poisoning attempts -- but requires care because MCP payloads can contain user data that has its own regulatory footprint.
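The cheap version of that capture looks like this. A sketch of a per-request access record for the streamable HTTP case -- the field names are illustrative, and the session ID arrives pre-hashed so the raw bearer value never touches the log pipeline:

```python
import json

def access_record(session_id_hash, raw_body, raw_response, started, finished):
    # Peeking at the method name costs one json.loads of the envelope;
    # only the top-level "method" field is kept, never the params.
    try:
        method = json.loads(raw_body).get("method", "<none>")
    except ValueError:
        method = "<unparseable>"
    return {
        "session": session_id_hash,      # hashed upstream, never the raw ID
        "method": method,
        "request_bytes": len(raw_body),
        "response_bytes": len(raw_response),
        "latency_ms": round((finished - started) * 1000, 1),
    }
```

Aggregating these records by method name gives the invocation-rate baseline directly, and the size and latency fields catch exfiltration-shaped anomalies (one method suddenly returning responses two orders of magnitude larger) without any payload inspection at all.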
Choosing a Transport
For local developer tooling, stdio with environment scrubbing and filesystem sandboxing is usually right. For same-network deployments with mutual trust, Unix domain sockets edge out stdio by supporting multiple clients. For cross-machine or cross-tenant deployments, streamable HTTP over TLS 1.3 with short-lived session IDs is the default answer. WebSocket is appropriate when you need bidirectional streaming with minimal overhead and have a story for connection-lifetime credential rotation. Everything else is experimental and should be treated as such.
How Safeguard Helps
Safeguard's MCP proxy sits at the transport layer and normalizes observability across every transport option -- stdio exec traces, streamable HTTP access logs, and WebSocket frame records land in a common schema. We enforce TLS posture for HTTP-based transports, rotate session IDs according to configurable policies, and catch origin-header and session-binding failures before they reach the server. For stdio deployments, Safeguard's sandbox wrapper strips inherited environment secrets and scopes filesystem access to declared needs, making it practical to run untrusted servers without giving them the run of the host.