Most dependency visualizations are pretty and useless. You generate a graph, you get a hairball, and you close the tab. The dependency graphs for a typical npm project are so dense that you cannot see structure; everything is connected to everything. PyPI is similar. Java is worse.
Go is different. The module graph for a non-trivial Go service is usually small enough that a human can actually look at it and reason about it. Module graph pruning, introduced in Go 1.17 (August 2021), means the graph a build actually uses is often an order of magnitude smaller than the full transitive closure. This makes visualization a real security review tool in Go in a way it is not in other ecosystems. I want to make that argument and show the patterns I use.
What does the Go module graph actually look like?
Run go mod graph in any Go project. The output is plain text, one line per edge: parent@version child@version, with the main module listed without a version. A typical service I work on produces 400 to 2,000 lines. An npm package-lock.json for a comparable service can run to 30,000 lines.
The graph is smaller for several reasons. The standard library is large, so many modules rely on it without needing third-party equivalents. Go's minimal version selection tends to produce one version per module rather than multiple versions coexisting. And pruning restricts the graph to what the current build actually needs.
This smallness is why graphs are useful for security. When you have 500 modules total, you can look at a visualization and ask, "why is this here?" You can do that for every node. You cannot do that for 30,000.
What tools produce useful visualizations?
go mod graph is the primitive. Everything else builds on its output. I use a few tools depending on context.
modgraphviz ships in the golang.org/x/exp repo and converts go mod graph output to DOT format for Graphviz. It is old, simple, and still works. The result is a static graph that I render to PDF for review.
gomod from github.com/Helcaraxan/gomod produces nicer output with analysis features, including detecting circular dependencies and unreachable modules. Version 0.7.0 is current as of early 2024.
For interactive exploration, Safeguard's dependency graph view, which I obviously recommend, loads the graph in a browser and lets you filter by severity, license, or maintainer. For purely DIY setups, d3-force with a custom loader over go mod graph output produces interactive visualizations that are surprisingly useful.
What should you look for?
Visualizations reveal structural properties that are hard to see in flat lists.
First, unexpected centrality. A module that sits at the heart of your graph, with many inbound and outbound edges, is a high-value target. If github.com/obscure-person/magic-util has that position, the question is: why? Usually the answer is that it was added as a convenience years ago and now everything depends on it. That is a supply chain risk worth pricing.
Second, redundant implementations. If you see two YAML parsers, two UUID generators, two structured loggers, that is a bloat signal. Each adds attack surface. Consolidate.
Third, abandoned branches. Modules whose inbound edges all come from one other module, and whose outbound edges go to modules you do not otherwise use, are often indicators of an orphaned transitive dependency. If you can remove the parent, the orphaned branch goes with it, which shrinks your graph.
Fourth, version skew. The graph view makes it easy to spot modules included at multiple versions (via replace or in workspaces), which is sometimes a sign of unintentional divergence.
How does this intersect with vulnerabilities?
Overlay CVE data on the graph and the hotspots jump out. A module with five open CVEs that sits at the root of a subtree of twenty other modules is far more concerning than the same module sitting at a leaf. govulncheck tells you reachability, which is a different analysis, but graph position tells you blast radius.
I built a small tool that colors nodes by their highest-severity unresolved CVE and sizes them by how many leaves depend on them. The resulting image is a map of risk. The largest, reddest nodes are where to focus first.
Concrete example: CVE-2022-41723
In February 2023, CVE-2022-41723 was disclosed in golang.org/x/net/http2; versions before v0.7.0 were affected. Many projects pulled the module in transitively, for example through google.golang.org/grpc, which requires golang.org/x/net directly.
A flat scanner told you: you have a vulnerable version. A graph view told you: it is a transitive dependency of these three direct dependencies, and it appears in your build because of that grpc usage. This is useful context for remediation because you know that upgrading grpc is the correct fix, not pinning x/net directly.
Minimum version selection, visualized
Go's minimal version selection (MVS) picks, for each module, the highest version required by any module in the graph. This is deterministic and reproducible, but it has surprising consequences. Because MVS considers the whole graph, a test-only dependency can force an upgrade of a transitive dependency that affects production code.
A visualization that highlights which node in the graph is forcing the selected version of each module is invaluable. go mod why -m example.com/foo gives you a path, but seeing it in a graph makes it clearer. "This test dependency is forcing this production module to v1.5.0 because of this MVS path." Then you can decide whether that is acceptable.
What about at organizational scale?
A single service's graph has 500 nodes. An organization with 200 services has 100,000 node-service pairs, with heavy overlap in the core. Visualizing this is where tooling matters.
I look at two views. First, per-service graphs for deep review. Second, an aggregated view that shows the most commonly used modules across the organization, colored by their risk profile. The aggregated view tells you where to invest security effort: the modules that appear in 150 of 200 services are your crown jewels. If one of them has a CVE, the blast radius is huge.
Safeguard's cross-project dependency view does this aggregation automatically, which is how I built up the intuition in the first place. Before Safeguard, I was scripting go mod graph across dozens of repos and joining the outputs in sqlite.
Visualizations and developer culture
There is a secondary benefit to visualizing dependency graphs: it changes how developers think about adding dependencies. When every go get shows up as a new node on a team-visible graph, the social cost of adding a throwaway dependency goes up. In several teams I have worked with, visualizing the graph led to voluntary reductions in dependency count, because engineers realized how much they were pulling in for a small convenience.
This is not a security control in the mechanical sense. But culture matters. A team that feels ownership over its graph is more likely to notice when something suspicious appears in it.
What do I keep on the wall?
For each service I review, I produce two graphs: a full graph colored by CVE severity, and a "surprises" graph that shows only modules pulled in through exactly one edge, sorted by how obscure the maintainer is. The second graph sparks a surprising number of "why did we add this?" conversations. Both are regenerated weekly so drift is visible.
How Safeguard Helps
Safeguard produces dependency graph visualizations across every Go project in your tenant, with vulnerability, license, and maintainer metadata overlaid on each node. The cross-project view aggregates graphs to show which modules are most depended upon organization-wide, making it easy to spot crown-jewel dependencies whose compromise would cascade widely. Policy gates can alert when a PR adds a node that connects to a high-risk branch of the graph, and the weekly diff view shows how the dependency surface is evolving so that unwanted growth is visible before it calcifies.