
Dynamic Dispatch: Griffin AI vs Mythos

Dynamic dispatch hides real exploits behind indirection. Griffin AI models the dispatch; Mythos-class tools guess. That gap changes outcomes.

Shadab Khan
Security Engineer
7 min read

Dynamic dispatch is where most reachability analyses quietly lose accuracy. Call sites that invoke a method on an interface, reflective lookups, plugin registries, higher-order functions, and decorator-resolved routes all produce call edges that are not visible in the source of the calling file. The caller says "call this method"; the actual callee is determined at runtime by the receiving type or the registry state. If the analysis cannot resolve that indirection, it cannot claim a reachable path. This post walks through how Griffin AI handles dynamic dispatch and how Mythos-class pure-LLM tools approach the same problem without a grounded graph.
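The indirection is easy to see in miniature. In this sketch (all names are invented for illustration), the call site invokes whatever handler the registry currently holds; nothing in the calling code names the callee:

```javascript
// A string-indexed registry: the call site below cannot name its callee.
const registry = new Map();

function register(name, handler) {
  registry.set(name, handler);
}

// Two possible callees; which one runs depends on registry state at runtime.
register('parse', (input) => JSON.parse(input));
register('echo', (input) => input);

function dispatch(name, input) {
  // Static analysis of this line alone sees only "call registry.get(name)";
  // the concrete callee is determined by the runtime value of `name`.
  return registry.get(name)(input);
}

console.log(dispatch('parse', '{"ok":true}')); // → { ok: true }
```

An analyzer that cannot track what flows into the registry sees a dead end at the `registry.get(name)(input)` line, which is exactly the truncation problem discussed below.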

Why dynamic dispatch matters

Reachability analysis cares about dynamic dispatch because most real application code leans on it. Every framework router is a form of dispatch; every dependency injection container is a form of dispatch; every plugin system is a form of dispatch. Object-oriented patterns, functional patterns, and metaprogramming patterns all introduce edges that a naive static analyzer cannot see.

The impact on the CVE stream is visible. CVE-2021-44228 (Log4Shell) exploited JNDI lookup, a runtime dispatch through a naming service. CVE-2017-5638 (Struts OGNL) exploited runtime expression evaluation, another form of dispatch. CVE-2022-22965 (Spring4Shell) hinged on reflective property access. In each case, the exploit path contained a dispatch step that any static analyzer without a dispatch model would treat as a dead end.

Griffin AI's dispatch model

Griffin AI resolves dynamic dispatch through a combination of type analysis, framework modeling, and symbolic evaluation. For method calls on interface or abstract types, the analyzer enumerates the possible concrete receivers using type inference and the class hierarchy, then records each as a candidate callee with a confidence score. For calls through higher-order functions, it tracks the function values that flow into the call site through its taint-like forward analysis. For reflection and string-indexed dispatch, it uses pattern recognition and framework registries to identify the callees.

The resulting graph is over-approximate in a controlled way. Griffin includes edges for possible callees even when it cannot prove the exact one, and it annotates those edges with confidence metadata. The LLM's downstream reasoning uses that metadata to communicate uncertainty when it matters. A path that crosses a high-confidence dispatch edge is treated differently from a path that crosses a low-confidence one.
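A minimal sketch of what candidate enumeration with confidence metadata might look like; this is an invented representation for illustration, not Griffin's actual internals:

```javascript
// Hedged sketch: over-approximate dispatch edges with confidence metadata.
class ClassHierarchy {
  constructor() { this.impls = new Map(); } // interface name -> impl names
  addImpl(iface, impl) {
    if (!this.impls.has(iface)) this.impls.set(iface, []);
    this.impls.get(iface).push(impl);
  }
  candidates(iface) { return this.impls.get(iface) ?? []; }
}

// Enumerate possible callees for a call through an interface-typed receiver.
// Confidence falls as the candidate set grows: one impl is near-certain,
// many impls means each edge is weaker.
function resolveDispatch(hierarchy, iface, method) {
  const impls = hierarchy.candidates(iface);
  return impls.map((impl) => ({
    callee: `${impl}.${method}`,
    confidence: 1 / impls.length,
  }));
}

const h = new ClassHierarchy();
h.addImpl('UserService', 'DbUserService');
h.addImpl('UserService', 'CachedUserService');
console.log(resolveDispatch(h, 'UserService', 'findUser'));
// → two candidate edges, each at confidence 0.5
```

The point of the structure is that uncertainty lives on the edge, not in prose: a downstream consumer can filter or rank paths by the confidence of the dispatch edges they cross.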

Framework-aware dispatch is handled separately. Griffin ships models for Express, Fastify, Koa, Next.js, NestJS, Django, Flask, FastAPI, Spring, Spring Boot, Rails, Laravel, Gin, Echo, and a growing list of others. For each framework, the model knows how routes are registered, how middleware is chained, how dependency injection resolves, and how event handlers are dispatched. Those models turn framework indirection into explicit graph edges.
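To make the idea concrete, here is a toy version of what a framework model does: it translates route registrations (here in the Express style) into explicit call-graph edges from a synthetic HTTP entry node to the handler. The data shapes are invented for illustration:

```javascript
// Hedged sketch of a framework model: turn route registrations into
// explicit call-graph edges. Each registration is { method, path, handlerName },
// as a prior pass might extract from `app.get('/users', listUsers)`.
function expressModel(registrations) {
  return registrations.map((r) => ({
    from: `HTTP ${r.method.toUpperCase()} ${r.path}`, // synthetic entry node
    to: r.handlerName,                                // resolved callee
    kind: 'framework-route',
  }));
}

const edges = expressModel([
  { method: 'get', path: '/users', handlerName: 'listUsers' },
  { method: 'post', path: '/users', handlerName: 'createUser' },
]);
console.log(edges[0].from); // → "HTTP GET /users"
```

Once the registration is an edge, the router's indirection is gone: reachability queries can walk straight from the HTTP entry into the handler body.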

Mythos-class behavior on dispatch

Mythos-class pure-LLM tools do not have an explicit dispatch model. They rely on the LLM's pattern recognition to fill in the gaps. For common patterns, the LLM often gets this right; it knows that app.get('/users', handler) in Express registers handler for GET requests to /users, and it can follow that reasoning in prose. For less common patterns, the guesses become less reliable.

The failure modes cluster around three behaviors. The first is silent truncation, where the LLM stops following the path at a dispatch site because it cannot confidently resolve the callee, and the output simply does not mention the unresolved step. The second is hallucinated resolution, where the LLM produces a confident name for the callee that does not match any real function in the codebase. The third is generic pattern assertion, where the LLM declares the dispatch "standard" without actually identifying the specific callee.

None of the three behaviors is good for a security queue. Silent truncation means missed findings. Hallucinated resolution means findings that point at the wrong code. Generic pattern assertion means findings that cannot be actioned.

A concrete case: plugin-based routing

Consider a Fastify application that uses a plugin to register a suite of admin routes. The plugin reads a configuration file, iterates over the entries, and calls fastify.route(config) for each one. One of the routes has a vulnerable handler that calls an outdated JSON parser.
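The pattern looks roughly like this (route names, handlers, and the parser stand-in are invented; the config would normally come from a file rather than an inline array). Note that nothing at the `fastify.route(config)` call site names a handler; every callee comes from the config data:

```javascript
// Hedged sketch of the plugin pattern described above.
const routeConfig = [
  { method: 'GET', url: '/admin/stats', handler: statsHandler },
  { method: 'POST', url: '/admin/import', handler: importHandler }, // vulnerable path
];

function statsHandler(req) { return { stats: 'ok' }; }
function importHandler(req) {
  return legacyJsonParse(req.body); // reaches the outdated parser
}
// Stand-in for the vulnerable third-party parser.
function legacyJsonParse(text) { return JSON.parse(text); }

async function adminPlugin(fastify) {
  for (const config of routeConfig) {
    fastify.route(config); // callee determined entirely by config contents
  }
}
```

An analyzer with a Fastify model and data-flow tracking can connect `POST /admin/import` to `importHandler` and on to the parser; an analyzer reading only the call site sees a loop over opaque data.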

Griffin's dispatch model knows that fastify.route registers a handler for a method and path derived from the config object. It follows the edge from the plugin's iteration into each handler, including the vulnerable one. The call graph contains the edge from the HTTP entry to the specific handler, and the taint graph confirms that the request reaches the vulnerable parser.

A Mythos-class tool sees fastify.route(config) inside a loop. Unless the config is a literal in the same file, the LLM cannot tell which route is being registered. It often falls back to a generic warning about Fastify plugins, which is not useful when the actual vulnerable route is specific. Or it ignores the plugin entirely because it cannot follow the dispatch, which produces a false clean report.

Object-oriented dispatch

Java, C#, Kotlin, and Scala applications lean heavily on interface-based dispatch. A service interface has multiple implementations; the concrete implementation is chosen by the DI container at startup. Griffin's type analysis identifies the set of candidate implementations and, where possible, narrows them to the one actually wired in the container. The reachability graph reflects the specific binding used in the application.
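The narrowing step can be sketched in a few lines. This is a simplified model in JavaScript (the real case is a JVM-language DI container; all names here are invented): given the candidate implementations from the class hierarchy and the container's bindings, the edge set collapses to the single wired implementation when one is known:

```javascript
// Hedged sketch: narrow interface candidates to the binding actually
// wired in a DI container.
function narrowToBinding(candidates, containerBindings, iface) {
  const bound = containerBindings.get(iface);
  // If the container names one of the candidates, keep only that one;
  // otherwise fall back to the full over-approximate candidate set.
  return candidates.includes(bound) ? [bound] : candidates;
}

const candidates = ['DbUserService', 'CachedUserService', 'MockUserService'];
const bindings = new Map([['UserService', 'DbUserService']]);
console.log(narrowToBinding(candidates, bindings, 'UserService'));
// → ['DbUserService']
```

The fallback matters: when the binding cannot be recovered, the analyzer keeps all candidates rather than silently picking one, which is the difference between over-approximation and guessing.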

Mythos-class tools without a type inference engine tend to either reason about one implementation (usually the first one the retrieval layer surfaces) or refuse to decide. Neither is correct for a specific finding. An audit trail that says "this might be reachable via some implementation of UserService" is not something a security engineer can act on.

Duck-typed and prototype-based dispatch

JavaScript, Python, and Ruby add extra complexity because dispatch is based on runtime shape rather than declared type. Griffin's analyzers perform shape inference to approximate the runtime receiver and use framework-specific heuristics (for example, Django's model manager patterns or Express's this binding conventions) to resolve common cases. For patterns that cannot be resolved, Griffin records explicit unresolved markers so the LLM can communicate the uncertainty rather than hide it.
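A toy version of shape-based resolution with explicit unresolved markers might look like the following; the representation is invented for illustration, not Griffin's internals:

```javascript
// Hedged sketch: approximate the runtime receiver by shape, and emit an
// explicit unresolved marker instead of guessing when nothing matches.
function resolveByShape(callSite, knownShapes) {
  // Keep only shapes that define the called method.
  const matches = knownShapes.filter((s) => s.methods.includes(callSite.method));
  if (matches.length === 0) {
    return { unresolved: true, reason: `no known shape defines ${callSite.method}` };
  }
  return { unresolved: false, candidates: matches.map((s) => s.name) };
}

const shapes = [
  { name: 'FileStore', methods: ['read', 'write'] },
  { name: 'MemStore', methods: ['read'] },
];
console.log(resolveByShape({ method: 'read' }, shapes));
// → { unresolved: false, candidates: ['FileStore', 'MemStore'] }
```

The unresolved marker is the load-bearing part: it gives the downstream reasoning something structural to report, instead of leaving the gap to be papered over in prose.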

Mythos-class tools perform this shape inference implicitly inside the LLM, and the inference is lossy. The 2026 Q1 Griffin benchmark recorded a 31 percentage point advantage for grounded dispatch handling versus pure-LLM inference on JavaScript codebases, the largest single gap in the benchmark.

The uncertainty signal

Grounded analysis lets an engineer see where the uncertainty lives. A Griffin finding that crosses a low-confidence dispatch edge labels that edge explicitly; the engineer can decide whether to invest in deeper verification or defer the finding. A Mythos-class finding does not carry that signal, because the uncertainty is baked into the LLM's prose rather than the underlying structure. Teams end up treating all findings as equally uncertain, which is worse than having labeled uncertainty.

How Safeguard Helps

Safeguard renders Griffin AI's dispatch edges as first-class elements of the reachability path. When a finding crosses a virtual method call, a framework route, or a plugin registration, the console shows the candidate receivers, the evidence used to resolve the dispatch, and the confidence attached to each edge. Security engineers can audit a dispatch resolution in seconds rather than by hand-walking the code. That transparency is what makes grounded dispatch analysis trustworthy in audits and post-incident reviews.
