What’s the Moat in AI When Features Ship Overnight?
Shipping a product in the AI era demands a shift from massive context injection to precise context management. Liris AI automates coding workflows with a project knowledge graph that isolates the relevant slice of the codebase, enabling high-fidelity feature development and debugging.
We have been developing Liris AI for the past two years. It is a desktop tool that automates coding workflows: users describe a desired feature, and the system develops it, optionally taking control of the machine to execute the task directly. Throughout, the core objective is the same: optimizing the context sent to the language model.
1. The Problem of Scale in Vibe Coding
In large-scale projects spanning hundreds of files, most of the codebase is irrelevant to any given task. Efficiency depends on working with a precise slice of the repository, which sidesteps the failure points of traditional vibe coding in two core workflows:
- Debugging: isolating specific execution paths and state changes in existing code to identify and resolve regressions quickly.
- Feature development: generating new functional blocks from only the dependencies and architectural patterns the task actually requires.
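To make the slicing idea concrete, here is a minimal sketch of how a relevant slice might be extracted from a dependency graph. The graph, the file names, and the `slice_for_task` helper are all hypothetical illustrations, not Liris AI's actual implementation:

```python
from collections import deque

# Hypothetical dependency graph: each key is a code unit, each value lists
# the units it depends on. A real project graph would be far richer.
DEPS = {
    "billing/invoice.py": ["billing/tax.py", "core/db.py"],
    "billing/tax.py": ["core/config.py"],
    "core/db.py": ["core/config.py"],
    "core/config.py": [],
    "ui/dashboard.py": ["core/db.py"],
}

def slice_for_task(seeds, deps, max_depth=2):
    """Collect the code units reachable from the task's seed files
    within max_depth dependency hops (breadth-first traversal)."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        unit, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for dep in deps.get(unit, []):
            if dep not in seen:
                seen.add(dep)
                frontier.append((dep, depth + 1))
    return seen

# A task touching invoice generation pulls in tax, db, and config code,
# but never the unrelated dashboard module.
print(sorted(slice_for_task({"billing/invoice.py"}, DEPS)))
# → ['billing/invoice.py', 'billing/tax.py', 'core/config.py', 'core/db.py']
```

The depth bound is the key lever: it trades recall for context size, which is exactly the budget a language model's context window imposes.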
2. Technical Core: The Project Knowledge Graph
Liris AI constructs a project knowledge graph from the repository structure: the file tree forms a taxonomy, functions and classes become nodes, and dependencies and call chains become edges. This ensures the model works with the right slice of the codebase rather than the entire repo.
On top of this graph, two integration points keep the workflow unified:
- Direct connections to tools such as Codex, Claude Code, Cursor, and VS Code provide a seamless workflow across environments.
- Code can be retrieved directly from snippets, so the model sees a unified context regardless of where the code came from.
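The node-and-edge construction described above can be sketched for Python source using only the standard-library `ast` module. The sample source and the `build_graph` helper are illustrative assumptions; Liris AI's multi-language graph is far more elaborate:

```python
import ast

# Tiny sample module: one class, two functions, one call relation.
SOURCE = '''
class Invoice:
    def total(self):
        return apply_tax(self.subtotal)

def apply_tax(amount):
    return amount * 1.2
'''

def build_graph(source):
    """Return (nodes, edges): functions and classes as nodes,
    caller -> callee relations as edges."""
    tree = ast.parse(source)
    nodes, edges = set(), set()
    for item in ast.walk(tree):
        if isinstance(item, (ast.FunctionDef, ast.ClassDef)):
            nodes.add(item.name)
        if isinstance(item, ast.FunctionDef):
            # Record every simple-name call made inside this function body.
            for call in ast.walk(item):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    edges.add((item.name, call.func.id))
    return nodes, edges

nodes, edges = build_graph(SOURCE)
print(sorted(nodes))  # → ['Invoice', 'apply_tax', 'total']
print(sorted(edges))  # → [('total', 'apply_tax')]
```

Walking edges like `('total', 'apply_tax')` is what lets a task that touches `Invoice.total` pull in `apply_tax` automatically while leaving unrelated definitions out of the prompt.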
3. Market Validation and Differentiation
The recent emergence of GitNexus, an open-source project with a similar approach, provides strong validation for this direction. In a landscape where features can be replicated quickly, the focus shifts toward depth of architecture and reliability of automated reasoning.
