
Understand-Anything: a knowledge graph for your codebase that hit 5K stars in a week

8 min read

AI coding agents spend most of their tokens just figuring out where things are. Before Claude can fix a bug in your auth middleware, it needs to read your directory tree, open a handful of files to understand the module structure, trace imports across packages, and build a mental model of how everything connects. By the time it starts writing code, a significant chunk of your context window is gone — consumed by exploration, not execution.

Understand-Anything is a Claude Code plugin that tries to front-load that exploration. It scans your codebase once using a multi-agent pipeline, builds a JSON knowledge graph of every module, function, and dependency relationship, then serves an interactive React dashboard where you can visually explore the architecture. Eight days after launch, it has over 5,000 GitHub stars.

The name is misleading. This isn’t a multimodal AI system that understands images and documents. It’s a codebase-specific tool — a structured way to give both humans and AI agents a map of a repository before they start navigating it blind.

What the pipeline actually does

When you run /understand in Claude Code, the plugin dispatches five specialized agents in sequence:

  1. Project Scanner reads your directory tree, identifies the tech stack, and creates an initial inventory of files worth analyzing. It respects .gitignore and has configurable exclusion patterns for node_modules, build artifacts, and the like.

  2. File Analyzer is the expensive step. It runs in parallel (up to three concurrent analyses) and uses Claude to read each significant file, extract its exports, imports, dependencies, and purpose, then generate a plain-English summary. The output for each file includes structured metadata — what the file does, what it depends on, and what depends on it.

  3. Architecture Analyzer takes the individual file analyses and synthesizes them into a system-level view. It identifies architectural layers (API routes, business logic, data access, shared utilities), maps the major data flows, and flags patterns like circular dependencies or tightly coupled modules.

  4. Tour Builder generates guided walkthroughs of the codebase — narrative paths through the architecture aimed at different personas. A new developer gets an onboarding tour. A PM gets a capabilities overview. An AI agent gets a structural summary optimized for downstream code generation.

  5. Graph Reviewer validates the final knowledge graph for consistency. It checks that all referenced nodes exist, that edge relationships are bidirectional where they should be, and that the graph is connected (no orphaned subgraphs).
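The reviewer's two core checks amount to plain graph traversals. Here is a minimal sketch of both — dangling-edge detection and connectivity — using assumed node/edge shapes, not the plugin's actual schema:

```typescript
// Assumed shapes for illustration; the plugin's real schema may differ.
interface GraphNode { id: string; }
interface GraphEdge { from: string; to: string; }

// Check 1: every edge endpoint must refer to an existing node.
function danglingEdges(nodes: GraphNode[], edges: GraphEdge[]): GraphEdge[] {
  const ids = new Set(nodes.map((n) => n.id));
  return edges.filter((e) => !ids.has(e.from) || !ids.has(e.to));
}

// Check 2: the graph must be connected (ignoring edge direction).
// Returns the ids of nodes unreachable from the first node.
function orphanedNodes(nodes: GraphNode[], edges: GraphEdge[]): string[] {
  if (nodes.length === 0) return [];
  const adj = new Map<string, string[]>();
  for (const n of nodes) adj.set(n.id, []);
  for (const e of edges) {
    adj.get(e.from)?.push(e.to);
    adj.get(e.to)?.push(e.from);
  }
  const seen = new Set<string>([nodes[0].id]);
  const queue = [nodes[0].id];
  while (queue.length > 0) {
    const id = queue.shift()!;
    for (const next of adj.get(id) ?? []) {
      if (!seen.has(next)) { seen.add(next); queue.push(next); }
    }
  }
  return nodes.map((n) => n.id).filter((id) => !seen.has(id));
}
```

Any node returned by the second check is an orphaned subgraph member — exactly the condition the reviewer flags.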

The output is a knowledge-graph.json file containing nodes (files, modules, functions) and edges (imports, calls, extends, implements) with metadata attached to each. This file becomes the data source for the dashboard and for subsequent /understand-chat queries.
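The post doesn't document the exact schema, but a plausible shape for knowledge-graph.json, inferred from the node and edge kinds listed above, might look like this (the field names are assumptions):

```typescript
// Assumed shape, inferred from the node/edge kinds described in the post —
// not the plugin's documented schema.
type NodeKind = "file" | "module" | "function";
type EdgeKind = "imports" | "calls" | "extends" | "implements";

interface KgNode {
  id: string;      // e.g. a file path like "src/auth/middleware.ts"
  kind: NodeKind;
  summary: string; // plain-English description from the File Analyzer
  tags: string[];
}

interface KgEdge {
  from: string; // id of the source node
  to: string;   // id of the target node
  kind: EdgeKind;
}

interface KnowledgeGraph {
  nodes: KgNode[];
  edges: KgEdge[];
}

// A loader that fails fast on malformed input.
function loadGraph(json: string): KnowledgeGraph {
  const g = JSON.parse(json) as KnowledgeGraph;
  if (!Array.isArray(g.nodes) || !Array.isArray(g.edges)) {
    throw new Error("not a knowledge graph: missing nodes/edges arrays");
  }
  return g;
}
```

A flat nodes-plus-edges layout like this is easy to validate and cheap to query, which matters given that both the dashboard and the chat command read it repeatedly.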

The dashboard

Running /understand-dashboard launches a local React 19 application that renders the knowledge graph as an interactive node-edge visualization using React Flow and Dagre for automatic layout.

The dashboard has a few features that go beyond a basic graph viewer:

Persona-adaptive views. You can switch between “junior developer,” “project manager,” and “AI-assisted developer” perspectives. Each filters and annotates the graph differently — the PM view emphasizes feature boundaries and integration points, the junior dev view highlights entry points and core abstractions, the AI view surfaces dependency chains and modification impact zones.
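One straightforward way to implement persona switching is tag-based filtering: keep only nodes whose tags matter to the selected persona. The tag names and mapping below are illustrative, not the dashboard's actual logic:

```typescript
interface ViewNode { id: string; tags: string[]; }

type Persona = "junior-dev" | "pm" | "ai-dev";

// Hypothetical mapping from persona to the tags it cares about.
const personaTags: Record<Persona, string[]> = {
  "junior-dev": ["entry-point", "core-abstraction"],
  "pm": ["feature-boundary", "integration-point"],
  "ai-dev": ["dependency-chain", "impact-zone"],
};

function filterForPersona(nodes: ViewNode[], persona: Persona): ViewNode[] {
  const wanted = new Set(personaTags[persona]);
  return nodes.filter((n) => n.tags.some((t) => wanted.has(t)));
}
```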

Guided tours. The tours generated by the Tour Builder agent render as step-by-step walkthroughs that highlight nodes in sequence, with narrative explanations between each step. Think of it as a senior engineer walking you through the codebase, except the senior engineer is an LLM that read every file.

Fuzzy search. Powered by Fuse.js, the search bar lets you find nodes by name, description, or tag with typo tolerance. Selecting a result centers and highlights the node in the graph.
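To see what typo tolerance buys over plain substring search, here is a minimal edit-distance matcher — not Fuse.js itself (which uses a different scoring algorithm), just a sketch of the idea:

```typescript
// Classic dynamic-programming Levenshtein distance.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Return names within maxDist edits of the query, closest first.
function fuzzyFind(names: string[], query: string, maxDist = 2): string[] {
  const q = query.toLowerCase();
  return names
    .map((name) => ({ name, d: editDistance(name.toLowerCase(), q) }))
    .filter((r) => r.d <= maxDist)
    .sort((a, b) => a.d - b.d)
    .map((r) => r.name);
}
```

A substring search would miss "authMidleware" entirely; an edit-distance match finds it one typo away.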

Chat integration. The /understand-chat command lets you ask natural language questions about the codebase, with the knowledge graph as context. Instead of Claude reading files to answer “how does authentication work here?”, it queries the pre-built graph — which is dramatically cheaper in tokens.
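The token savings come from retrieval: instead of dumping files into context, the chat can select the handful of relevant nodes and serialize them compactly. A rough sketch of that idea — the plugin's actual retrieval logic is not documented in this post:

```typescript
interface ChatNode { id: string; summary: string; }
interface ChatEdge { from: string; to: string; kind: string; }

// Select nodes whose summary mentions a query term, plus the edges between
// them, and render a compact text context for the model.
function graphContext(
  nodes: ChatNode[],
  edges: ChatEdge[],
  query: string,
): string {
  const terms = query.toLowerCase().split(/\s+/).filter((t) => t.length > 3);
  const hits = nodes.filter((n) =>
    terms.some((t) => n.summary.toLowerCase().includes(t)),
  );
  const ids = new Set(hits.map((n) => n.id));
  const lines = hits.map((n) => `${n.id}: ${n.summary}`);
  for (const e of edges) {
    if (ids.has(e.from) && ids.has(e.to)) {
      lines.push(`${e.from} -[${e.kind}]-> ${e.to}`);
    }
  }
  return lines.join("\n");
}
```

A few summary lines plus edge relations is orders of magnitude smaller than the source files they describe, which is where the token savings come from.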

The six slash commands

The plugin exposes its functionality through Claude Code’s skill system:

/understand: Run the full multi-agent analysis pipeline

/understand-dashboard: Launch the interactive graph viewer

/understand-chat: Ask questions about the codebase using the knowledge graph as context

/understand-diff: Analyze how recent changes affect the architecture

/understand-explain: Get a plain-English explanation of a specific file or module

/understand-onboard: Generate an onboarding guide for new contributors

The /understand-diff command is particularly interesting for ongoing use. Rather than re-running the full pipeline after every change, it takes a git diff and maps the changed files onto the existing graph, highlighting which nodes were affected and what downstream dependencies might need attention.
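Impact mapping like this is a reverse-dependency walk: start from the changed files and follow "who depends on this?" edges transitively. A minimal sketch, with assumed edge shapes rather than the command's real implementation:

```typescript
// Edge direction: `from` imports/calls `to`.
interface DepEdge { from: string; to: string; }

// Given files changed in a diff, walk reverse dependency edges to find
// every node that transitively depends on a changed file.
function impactedNodes(edges: DepEdge[], changed: string[]): Set<string> {
  // Reverse index: target -> nodes that depend on it.
  const dependents = new Map<string, string[]>();
  for (const e of edges) {
    const list = dependents.get(e.to) ?? [];
    list.push(e.from);
    dependents.set(e.to, list);
  }
  const impacted = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const id = queue.shift()!;
    for (const dep of dependents.get(id) ?? []) {
      if (!impacted.has(dep)) {
        impacted.add(dep);
        queue.push(dep);
      }
    }
  }
  return impacted;
}
```

Because this only traverses the pre-built graph, it costs no LLM calls at all — which is why incremental diff analysis is so much cheaper than re-running the pipeline.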

The rough edges

The community feedback after the first week has been honest, and several issues are significant enough to mention.

Token consumption is the biggest concern. The File Analyzer agent calls Claude for every significant file in your codebase. For a medium-sized project, that’s dozens or hundreds of LLM calls. Multiple users on the Max plan reported that a single /understand run exhausted their entire session token quota without producing visible output. One user on an Opus Pro plan described it as “completely exhausting my session tokens without showing anything for it.” This is the fundamental cost of LLM-powered analysis versus static analysis — you get richer semantic understanding, but you pay for it.

The dashboard freezes on large codebases. Users with graphs of roughly 3,000 nodes and 5,000 edges reported browser freezes. The Dagre layout algorithm runs on the main thread, and computing positions for thousands of nodes blocks rendering. PRs for Web Worker–based layout computation are open but not yet merged.

Graph layout can be unhelpful. Several users reported nodes clustering in a single horizontal line rather than spreading into a readable hierarchy. The automatic layout works well for small-to-medium graphs but breaks down at scale.

Security consideration. The knowledge-graph.json file is served directly by the dashboard’s local server. If you’re running this on a shared machine or exposing the port, the graph — which contains file paths and architectural descriptions of your codebase — is accessible. A GitHub issue flagged this, and it’s worth being aware of.

No test suite. The project ships without automated tests. As one user pointed out, “a plugin used to analyze codebases should itself have automated tests running on every PR.” The author has been transparent that the project was “vibe coded in a day” for personal use and went viral unexpectedly. That context is important — this is early-stage open source, not a polished product.

The broader landscape

Understand-Anything isn’t the only tool attacking this problem. In the same two-week window, several projects launched with overlapping goals:

code-review-graph takes the opposite approach — pure static analysis via tree-sitter, no LLM calls during indexing. It stores the graph in SQLite and supports incremental updates via git diffs. The author claims 6.8x to 49x token reduction in downstream agent usage. The tradeoff: no semantic summaries, no plain-English explanations. You get structure without understanding.

CodeGraph supports 16+ languages via tree-sitter, runs entirely locally, and exposes the graph through an MCP server so any compatible agent can query it. It focuses on breadth of language support over depth of analysis.

Axon runs a 12-phase indexing pipeline using KuzuDB (a graph database) and includes impact analysis with confidence scores. It supports live re-indexing with a --watch flag, making it viable for continuous use during development.

The common thread across all of these: AI coding agents waste enormous amounts of tokens scanning files to understand codebase structure. Pre-indexing that structure into a queryable graph — whether via LLM analysis or static parsing — lets agents query architecture rather than rediscover it every session.

The key design tradeoff is between richness and cost. LLM-powered pipelines like Understand-Anything and Axon produce human-readable summaries and semantic relationships, but they’re expensive to build and maintain. Static analysis tools like code-review-graph and CodeGraph are fast and cheap to index but produce structural data without natural-language context.

Who should try it

Understand-Anything is best suited for three scenarios:

Onboarding new team members. The guided tours and persona-adaptive views are genuinely useful for someone encountering a codebase for the first time. Running /understand-onboard and handing the dashboard URL to a new hire is a better introduction than “read the README and ask questions.”

Non-technical stakeholders who need to understand systems. The PM persona view and plain-English explanations lower the barrier to understanding technical architecture. A product manager who needs to understand which services are affected by a proposed feature can get that answer from the dashboard without reading code.

Inheriting unfamiliar codebases. If you’ve just been handed ownership of a service you didn’t build, the combination of /understand and /understand-chat gives you a structured way to explore rather than grep-and-hope.

For large monorepos, wait. The performance issues are real, and the token cost at scale is prohibitive. For codebases under a few hundred files, though, the experience is already useful — rough edges and all. The interactive dashboard in particular offers something that none of the static-analysis alternatives provide: a visual, explorable representation of your system that a non-engineer can actually navigate.

The project is eight days old. The author is responsive, PRs are flowing in, and the fundamental idea — give agents (and humans) a pre-built map instead of making them explore from scratch every time — is clearly resonating. Whether Understand-Anything specifically matures into a reliable tool or gets overtaken by one of its competitors, the category it represents is here to stay.


Written by

Daniel Dewhurst

Lead AI Solutions Engineer building with AI, Laravel, TypeScript, and the craft of software.
