An honest survey of what exists, what each tool actually solves, and where the gap is widest.
When I started researching cross-repo infrastructure dependency management, first as a practitioner hitting the problem at client sites, then as someone building a tool to solve it, I expected to find a crowded space. Dependency management is a well-understood problem in software engineering. Package managers have solved it for application code. Surely someone had solved it for infrastructure.
They hadn’t. Not because nobody tried, but because the tools that exist were built to solve adjacent problems. Each one is good at what it does. None of them answer the specific question that platform teams keep asking: if I change this shared module, image, or template, which repos break and who do I need to notify?
This post is a genuine attempt to map the landscape: what exists today, where each tool shines, and where it stops. I’ve used or evaluated most of these tools, talked with engineers who rely on them daily, and spent time in community discussions where people describe what’s working and what isn’t. I’ve tried to be fair. Every tool here solves a real problem for real teams. The point is not that they’re bad. It’s that the specific problem of cross-repo infrastructure dependency visibility sits in a gap between them.
How to think about the landscape
The tools people reach for when they encounter the dependency visibility problem fall into six categories. Understanding the categories matters more than evaluating individual tools, because the gap isn’t a missing feature in any one product. It’s a missing category.
- Service catalogs and developer portals — “what exists and who owns it”
- Dependency update automation — “keep everything on the latest version”
- Platform-specific dependency explorers — “see relationships within one ecosystem”
- Monorepo build tools — “solve the problem by putting everything in one repo”
- Security and compliance scanners — “find vulnerabilities and license issues across repos”
- DIY scripts and manual approaches — “build exactly what you need, maintain it forever”
Each category addresses a real need. Each one gets mistaken for a solution to the visibility problem. Let’s look at why.
Service catalogs: Backstage, Port, OpsLevel
What they do well
Service catalogs, with Backstage being the most prominent and Port and OpsLevel as managed alternatives, give engineering organisations a central place to answer “what services do we run, who owns them, and where do they live?” Backstage in particular has become the standard for developer portals at larger companies. It integrates with CI/CD, on-call systems, documentation, and cloud resources. It gives teams a single pane of glass for service ownership.
Port and OpsLevel take a similar approach with less self-hosting overhead. They provide scorecards, maturity tracking, and integrations with cloud providers. For organisations that need a developer portal, especially those with hundreds of services and unclear ownership, these tools are genuinely valuable.
Where they stop
The dependency model in service catalogs is declared, not discovered. In Backstage, you define a catalog-info.yaml per repo that describes the service, its owner, and its dependencies. This information is exactly as accurate as the last time someone updated that file.
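For concreteness, here is a minimal catalog-info.yaml of the kind Backstage expects (the service, owner, and dependency names are hypothetical). The dependsOn list is whatever someone last typed, which is the crux of the problem:

```yaml
# catalog-info.yaml — dependencies here are declared by hand,
# not discovered from the code
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api
spec:
  type: service
  owner: team-payments
  dependsOn:
    - component:auth-service
    - resource:payments-db
```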
In practice, catalog YAML goes stale quickly. Engineers update code. They don’t update the catalog. The dependency that got added three months ago isn’t in the YAML. The one that got removed six months ago is still listed. As one engineer described it to me, it’s “documentation with extra steps.”
This isn’t a flaw in Backstage; it’s a design choice. Service catalogs are built for service metadata (ownership, documentation links, runbook URLs), not for dependency graph accuracy. They’re excellent at answering “who owns this service?” They’re not designed to answer “which 40 repos consume this Terraform module at which version?”
Port has invested in auto-discovery features that pull some metadata from cloud resources and Git, which narrows the gap. But the cross-repo infrastructure dependency graph (Terraform modules sourcing other modules, Docker base images consumed by dozens of repos, CI templates included across the org) still requires either manual declaration or a separate discovery mechanism.
Who should use them
Any organisation with more than ~50 services that needs a central developer portal with ownership tracking, documentation, and service maturity scoring. They're complementary to dependency visibility tooling, not a replacement for it.
Dependency update automation: Renovate and Dependabot
What they do well
Renovate and Dependabot solve a specific, well-defined problem: when a dependency has a new version available, automatically open a pull request to update it. Renovate in particular is remarkably flexible. It supports dozens of package ecosystems, custom versioning schemes, grouping strategies, and scheduling controls. For teams that want to stay current on their dependencies, Renovate is close to best-in-class.
Dependabot, integrated directly into GitHub, offers a lower-configuration path to the same outcome. It’s particularly strong for npm, pip, Go modules, and GitHub Actions workflows.
Both tools are excellent at what they do, and many teams should be using one of them regardless of what else they adopt for dependency visibility.
Where they stop
The core distinction is between prevention and visibility. Renovate and Dependabot solve prevention: they keep consumers updated so that version drift doesn’t accumulate. They react after a new version is published by opening PRs across consuming repos.
What they don’t provide is the pre-release view. Before you publish version 2.0 of your shared Terraform module, before you even cut the release, you want to know: who is consuming the current version? How many repos will be affected? Which teams own them? Are any of them pinned to a version that’s already three releases behind?
Renovate can tell you that a new version is available. It cannot tell you the blast radius of a change you haven’t published yet. These are different problems, and confusing them leads teams to believe they have dependency visibility when they actually have dependency automation.
There’s a subtler gap too: Renovate and Dependabot work at the package level, not the repo level. They know that package.json depends on @company/[email protected]. They don’t know that the repo containing that package.json also has a Dockerfile that pulls an internal base image, a CI config that includes a shared template, and a Terraform module that sources three other internal modules. The cross-ecosystem picture is outside their scope.
Who should use them
Every team that consumes shared dependencies and wants to avoid version drift. Use them alongside dependency visibility tooling, not instead of it.
Platform-specific explorers: HCP Terraform, Artifactory, GitHub Dependency Graph
HCP Terraform Explorer
HashiCorp Cloud Platform (HCP Terraform, formerly Terraform Cloud) includes a module explorer that shows which workspaces use which modules. If your entire Terraform workflow runs through HCP Terraform (plan, apply, state management, all of it) this gives you a reasonable view of Terraform-specific module relationships.
The limitation is scope. It only works for Terraform. It only works for workspaces managed by HCP Terraform. If your org has a mix of HCP Terraform, self-hosted Terraform runners, CI-driven applies, and Atlantis (and many do) you get a partial view. The Docker images, CI templates, Helm charts, and Ansible roles that are part of the same dependency graph are invisible.
For teams that are fully committed to HCP Terraform and only need Terraform module visibility, this is a solid native solution. For everyone else, it’s a slice of the picture.
JFrog Artifactory
Artifactory is a universal artifact repository. For Terraform, it can serve as a private module registry with metadata about published modules: versions, download counts, and some dependency information.
The gap is on the consumer side. Artifactory knows what modules have been published. It doesn’t reliably tell you which repos are consuming a given module at which version right now. The publishing-side view and the consuming-side view are different, and Artifactory was built for the former.
GitHub Dependency Graph and Dependabot Alerts
GitHub’s built-in dependency graph parses lock files and manifests to show what each repo depends on. It’s tightly integrated with Dependabot alerts for security vulnerabilities.
It’s good for application dependencies: npm, pip, Go modules, Ruby gems. It’s limited for infrastructure dependencies: Terraform module sources, Docker base image relationships, CI template includes, and Helm chart dependencies aren’t part of GitHub’s dependency graph model. And the view is per-repo, not org-wide: you can see what one repo depends on, but you can’t easily ask “who across my org depends on this artifact?”
Who should use them
Teams that operate primarily within a single ecosystem and a single platform. If you’re a pure HCP Terraform shop, its module explorer gives you Terraform visibility. If you’re managing artifact publishing, Artifactory is the right tool for that. If you need per-repo application dependency visibility on GitHub, the built-in graph works. None of them provide the cross-ecosystem, org-wide view.
Monorepo build tools: Nx, Turborepo, Bazel
What they do well
This is the category that genuinely solves the dependency visibility problem, within its context. Monorepo build tools like Nx, Turborepo, and Bazel are built around a dependency graph. They know which packages depend on which. They can tell you exactly what’s affected by a change. They run only the tests and builds that are actually impacted.
Nx in particular is impressive here. Its nx affected command does precisely what platform teams wish they could do across a polyrepo: given a set of changed files, determine which projects in the repo are affected and need to be rebuilt or retested. It has a visual graph explorer, dependency analysis, and caching that skips unaffected work. Turborepo provides similar affected-project detection with a focus on speed and simplicity. Bazel takes it further with hermetic builds and fine-grained dependency tracking at the file level.
If your organisation operates as a monorepo, these tools give you dependency visibility almost for free. The graph is implicit in the repo structure, and the build tool maintains it automatically.
Where they stop
The limitation is the prerequisite: you have to be in a monorepo.
Most infrastructure teams dealing with the cross-repo dependency problem are not in a monorepo, and migrating to one is not a realistic option. An organisation with three hundred existing repos, multiple teams with separate access controls, compliance boundaries between business units, and repos inherited through acquisitions cannot consolidate into a single repository as a quarter-long project. It’s a multi-year architectural migration with significant operational risk.
Even organisations that want to move toward a monorepo often can’t do it all at once. They might consolidate application code but keep infrastructure repos separate. Or they might run a monorepo for one team while other teams stay on polyrepo. The dependency graph still crosses that boundary, and monorepo build tools only see what’s inside the monorepo.
There’s also a scope question. Nx and Turborepo are primarily designed for application code (TypeScript/JavaScript, with growing support for other languages). The infrastructure dependency graph, which includes Terraform modules, Docker base images, CI templates, Helm charts, and Ansible roles, doesn’t map cleanly onto a monorepo build tool’s model. Bazel is more general-purpose, but the overhead of Bazel adoption is substantial and rarely justified purely for dependency visibility.
One more nuance worth noting: even inside a monorepo, the build tool’s dependency graph tracks build-time relationships. It knows that package A imports package B. It doesn’t necessarily know that a Dockerfile in one directory pulls a base image that’s built by a CI job defined in another directory, or that a Terraform module sources another module via a git URL pointing at a subdirectory. The infrastructure dependency surface is different from the application dependency surface, even within a single repo.
Who should use them
Any team already operating in a monorepo, or actively planning a migration to one. Nx and Turborepo are excellent tools. The point is not that monorepos are wrong. It’s that “switch to a monorepo” is not actionable advice for the organisations that feel the cross-repo dependency problem most acutely.
Security and compliance scanners: Wiz, Snyk, SBoM tools
What they do well
Security scanners like Wiz and Snyk, along with the broader SBoM (Software Bill of Materials) ecosystem, solve an important and well-funded problem: finding known vulnerabilities (CVEs) and license compliance issues across an organisation’s software supply chain.
Wiz in particular has an attractive deployment model for this discussion: org-level installation via webhook or cloud connector, with no per-repo configuration needed. It scans broadly and automatically. The install pattern is exactly right for dependency discovery.
Snyk provides deep analysis of application dependencies, container images, and infrastructure-as-code configurations. It’s strong on identifying security risks within dependency trees.
Where they stop
These tools are built around a security and compliance model, not a dependency visibility model. The question they answer is “do any of my dependencies have known vulnerabilities?”, not “which repos depend on this internal module and what’s the blast radius if I change it?”
The distinction matters in practice. Wiz can tell you that a container image has a CVE. It doesn’t tell you that 30 repos across your org use that image as a base, that 12 of them are pinned to an outdated version, and that updating the image will require coordinated changes across four teams. Snyk can tell you that a Python package has a license issue. It doesn’t show you the cross-repo dependency graph between your internal Terraform modules, CI templates, and Helm charts.
There’s also a scope gap: most security scanners focus on application dependencies (npm packages, Python libraries, container images) and cloud configuration. Terraform module-to-module relationships, CI template includes, and Ansible role dependencies are generally outside their scanning model.
The pricing model is also relevant. Enterprise security platforms are priced for security budgets, not platform engineering budgets. Using a security scanner as a general-purpose dependency mapping tool would be expensive overkill for the dependency visibility problem specifically.
Who should use them
Any organisation that takes software supply chain security seriously, which should be everyone. These tools are essential for what they do. They just don’t do dependency visibility, and shouldn’t be expected to.
DIY approaches: grep, scripts, and custom crawlers
What they do well
This is the category that tells you the most about the problem. When multiple independent engineers, with no coordination between them, build nearly identical tools, you’re looking at a genuine unmet need.
The common pattern: a scheduled job that shallow-clones every repo in the org (or uses the GitLab/GitHub API), greps or parses Dockerfiles, Terraform source blocks, and CI include directives, dumps the results to SQLite or a spreadsheet, and maybe renders a graph with something like Observable Framework or a simple web page.
These solutions work. They prove the approach is sound. Some of them are impressively capable, handling hundreds or even thousands of repos, tracking multiple file types, producing usable output. The engineers who build them solve their team’s immediate problem, often in a day or two of effort.
Where they stop
Bespoke solutions have a consistent set of failure modes:
They go stale. A nightly cron job means the graph is up to 24 hours out of date. A manually triggered script means it’s only as fresh as whoever last remembered to run it.
They’re fragile. When a repo is renamed, archived, or changes its structure, the script breaks. Handling these edge cases, and the dozens of others that emerge over time, requires ongoing maintenance that nobody budgeted for.
They’re not generalised. Each one handles the specific file types and repo structures of one org. Moving to a new org, or adding support for a new ecosystem, means rewriting.
They don’t survive their creator. The engineer who built the script in a weekend moves to another team or another company. Nobody else understands how it works. Six months later, it’s running but nobody trusts the output. Or it’s silently broken. Or it’s been turned off.
They lack a query interface. Most DIY solutions produce a static output. Asking “what’s the blast radius of changing module X?” means writing a new query or script, not clicking a button.
The meta-point: the fact that people keep building these, and that the solutions converge on the same architecture, validates both the problem and the general approach. What’s missing is a product that does it well, keeps doing it, and handles the long tail of edge cases that a weekend hack can’t.
Who should use them
Teams with a specific, narrow need (maybe just Terraform modules across 30 repos) and an engineer willing to maintain a custom script. For anything broader or longer-lived, the maintenance cost exceeds the build cost within months.
The gap in the landscape
Line up the categories side by side and the gap becomes clear:
- Service catalogs know what exists and who owns it, but dependencies are declared, not discovered, and go stale.
- Dependency update tools keep consumers current, but only react after a version is published and don’t show blast radius beforehand.
- Platform-specific explorers show relationships within one ecosystem, but are blind to everything outside it.
- Monorepo build tools solve the problem structurally, but only if you’re already in a monorepo. Most infrastructure teams aren’t.
- Security scanners find vulnerabilities across repos, but don’t map infrastructure dependency graphs.
- DIY scripts prove the approach works, but are bespoke, brittle, and don’t survive their creator.
The missing category is cross-ecosystem infrastructure dependency visibility: a tool that automatically discovers how repos depend on each other across all the ecosystems a platform team uses (Terraform, Docker, CI templates, Helm, Ansible, Python, Go, npm, Kubernetes), keeps that graph current without manual maintenance, and makes it queryable.
Specifically, the gap has these characteristics:
Auto-discovered, not declared. The dependency graph must be built from what’s actually in the repos: the Terraform source blocks, the Dockerfile FROM statements, the CI include directives. Any approach that requires humans to maintain a manifest will go stale. The community has been clear and consistent on this point.
Cross-ecosystem. Real infrastructure dependency graphs cross tool boundaries. A Terraform module change affects Docker builds which affect CI pipelines which affect Helm deployments. Single-ecosystem tools miss exactly the connections where surprise breakage happens.
Consumer-side visibility. Most existing tools show the producer side: what modules exist, what images are published, what templates are available. The question platform teams actually need answered is the consumer side: who is pulling this artifact, at which version, right now?
Org-level installation. At a hundred or more repos, anything that requires per-repo setup is a non-starter. The installation model should be a single read-only access token at the org level, similar to how security scanners deploy.
Freshness without manual effort. The graph must stay current through scheduled or event-driven rescans. Staleness is the number one reason DIY solutions get abandoned. Any productised solution that doesn’t solve freshness will follow the same path.
Blast radius before you push. The highest-value query is prospective, not retrospective: before I publish a breaking change, show me every repo and team that will be affected. This requires the full transitive graph, not just direct dependencies.
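The transitive requirement is the part that’s easy to underestimate. A minimal sketch of the blast-radius query, assuming a reverse-dependency index has already been discovered (the repo and module names are hypothetical):

```python
from collections import deque

# Hypothetical reverse-dependency index: artifact -> repos that
# consume it directly. In practice this comes from a discovered graph.
CONSUMERS = {
    "modules/network": ["modules/eks", "infra/prod-vpc"],
    "modules/eks":     ["infra/prod-cluster", "infra/staging-cluster"],
}

def blast_radius(artifact: str) -> set[str]:
    """Every repo transitively affected by changing `artifact` (BFS)."""
    affected: set[str] = set()
    queue = deque([artifact])
    while queue:
        node = queue.popleft()
        for consumer in CONSUMERS.get(node, []):
            if consumer not in affected:
                affected.add(consumer)
                queue.append(consumer)
    return affected
```

In this toy graph, blast_radius("modules/network") reaches infra/prod-cluster through modules/eks — a repo that never references modules/network directly, which is exactly the breakage a direct-dependency view undercounts.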
Where I think this is heading
The dependency visibility gap isn’t going to close on its own. If anything, it’s widening. Organisations are adopting more infrastructure tools, not fewer. The shift toward platform engineering means more shared components consumed across more repos. AI-assisted development is accelerating code output without any corresponding improvement in dependency awareness.
I expect we’ll see this category emerge properly over the next couple of years. The building blocks exist: Git platform APIs for repo enumeration, well-understood file formats for parsing, graph databases and recursive queries for traversal, and a clear community demand validated by the fact that people keep building their own versions.
Some existing tools may expand into this space. Backstage’s plugin ecosystem could support auto-discovery. Renovate’s deep parser library could be extended for visibility queries. Security platforms like Wiz, which already have the org-level deployment model, could add infrastructure dependency graphs alongside their vulnerability scanning.
What I think is more likely is that purpose-built tools will emerge. Tools designed specifically for cross-repo infrastructure dependency visibility, with auto-discovery, cross-ecosystem support, and blast radius analysis as core capabilities rather than bolted-on features.
That’s what I’m building with Riftmap. It scans a GitLab or GitHub org, auto-discovers cross-repo dependencies across Terraform, Docker, CI pipelines, Python, Go, npm, Ansible, Helm, Kubernetes, and Kustomize, and presents the graph with interactive blast radius analysis. Org-level install, no per-repo config, no YAML to maintain.
It’s currently in early access, and it’s far from finished. But the foundation is live: ten ecosystem parsers, a resolver that matches consumers to producers across the org, incremental scanning to keep the graph fresh, and a visual graph with impact mode that shows exactly which repos are affected by a given change.
If you’re working in this space, whether as an engineer dealing with the problem, someone building tooling, or someone evaluating solutions, I’d welcome the conversation. The more perspectives I hear, the better the tooling gets for everyone.
You can see more, and sign up for early access to follow along as Riftmap develops, at riftmap.dev — or reach me at [email protected].