<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Christopher Bunk]]></title><description><![CDATA[Practical insights on AI-first product development, technology strategy, and executive leadership]]></description><link>https://writings.chrisbunk.com</link><image><url>https://writings.chrisbunk.com/img/substack.png</url><title>Christopher Bunk</title><link>https://writings.chrisbunk.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 14 May 2026 09:05:16 GMT</lastBuildDate><atom:link href="https://writings.chrisbunk.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Christopher Bunk]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[christopherbunk438099@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[christopherbunk438099@substack.com]]></itunes:email><itunes:name><![CDATA[Christopher Bunk]]></itunes:name></itunes:owner><itunes:author><![CDATA[Christopher Bunk]]></itunes:author><googleplay:owner><![CDATA[christopherbunk438099@substack.com]]></googleplay:owner><googleplay:email><![CDATA[christopherbunk438099@substack.com]]></googleplay:email><googleplay:author><![CDATA[Christopher Bunk]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Context is King]]></title><description><![CDATA[What you bring to your agent is the foundation of agentic impact]]></description><link>https://writings.chrisbunk.com/p/context-is-king</link><guid isPermaLink="false">https://writings.chrisbunk.com/p/context-is-king</guid><dc:creator><![CDATA[Christopher Bunk]]></dc:creator><pubDate>Sat, 27 Dec 2025 18:52:20 GMT</pubDate><content:encoded><![CDATA[<p>There&#8217;s a hierarchy in AI-assisted 
work that most practitioners get backwards.</p><p>They obsess over model selection. They debate Claude vs GPT vs Gemini. They study orchestration frameworks and agent architectures. They optimize token costs and latency.</p><p>None of this matters if you get context wrong.</p><p>Context&#8212;the information you pass to the model&#8212;is the single most important factor determining whether an agentic task succeeds or devolves into a hallucination-filled mess. Model choice matters. Orchestration plays its role. But without the right context on a complicated query, you&#8217;re navigating with a broken compass.</p><p>I&#8217;ve spent months building agentic workflows for development and research. The pattern is unmistakable: invest in context engineering, and everything else falls into place.</p><div><hr></div><h2>The Prompt Era is Over</h2><p>Remember when ChatGPT hit the scene? Everyone was collecting prompts like rare baseball cards. 
&#8220;This persona pattern unlocks 10x productivity.&#8221; &#8220;This Chain of Thought template changes everything.&#8221; We saved them in Notion, shared them on Twitter, treated them like secret weapons.</p><p>That era is over.</p><p>Here&#8217;s what I&#8217;ve learned on complicated tasks: just ask the LLM to write the prompt for you. Describe what you&#8217;re trying to accomplish, and the model will generate a better prompt than any you&#8217;d craft by hand.</p><p>Research backs this up. <a href="https://arxiv.org/pdf/2211.01910">Zhou et al.&#8217;s Automatic Prompt Engineer</a> (APE) tested automated prompt generation against human-authored prompts across the BIG-Bench suite. APE outperformed humans in 19 out of 24 tasks. The models are better at prompting themselves than we are at prompting them.</p><p>This flips the mental model. Instead of memorizing prompt patterns, focus on clearly articulating what you want. The model handles the translation.</p><p>One interesting pattern I&#8217;ve seen in practice: Ticket-Driven Development (TkDD) approaches where prompts are auto-generated at ticket creation time and stored as part of the ticket metadata. The ticket captures intent; the system generates the optimal execution prompt. Context engineering at the point of capture.</p><div><hr></div><h2>The Automated Memory Problem</h2><p>Most consumer LLM clients now ship with automated memory. Claude, ChatGPT, Gemini&#8212;they all try to remember things about you across conversations. It happens behind the scenes, and for casual use, it works fine.</p><p>For serious agentic work, automated memory is dangerous.</p><p>The fundamental issue is incompleteness and lack of transparency. These systems decide <em>what</em> to remember based on heuristics you can&#8217;t see or correct. 
OpenAI&#8217;s forums are full of users reporting duplicated memories, ignored context, and the system aggressively storing irrelevant details while missing important ones.</p><p>I experienced this firsthand. I was demonstrating LLM capabilities to a friend in finance and used an example about someone making $50,000 annually. That number got stored. For weeks afterward, financial questions were answered with assumptions about my $50k salary. Every investment recommendation, every budgeting suggestion, every career question&#8212;all biased by a throwaway example I&#8217;d forgotten about.</p><p>The system remembered the wrong thing, and I had no visibility into the error until its effects became obvious.</p><p>This is the core tension: automated memory optimizes for user-friendliness by hiding complexity. But for agentic work, you need control. You need to know exactly what context is being injected. You need the ability to curate, correct, and evolve your knowledge deliberately.</p><div><hr></div><h2>Deliberate Long-Term Memory</h2><p>The alternative is deliberate knowledge construction. Instead of hoping the system remembers the right things, you build structured knowledge stores that you control.</p><p>The tactical toolbox here is broader than most people realize.</p><p><strong>Markdown as universal substrate.</strong> Markdown files are both human-readable and agent-readable. Any file system becomes a knowledge base. You don&#8217;t need a specialized database&#8212;you need a folder of well-organized <code>.md</code> files.</p><p>Tools like Obsidian store data as markdown while giving you powerful tooling for discovery, search, linking, and extension through hundreds of community plugins. Store the vault in git, and you gain change history, the ability to revert, and visibility into how your knowledge evolves.</p><p>The key insight: markdown future-proofs your investment. If a better tool emerges, your knowledge migrates trivially. 
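</p>
<p>As a rough sketch of how little machinery this takes (the vault layout and the <code>load_context</code> helper are illustrative, not any particular tool&#8217;s API), a few lines of Python can turn a folder of markdown into agent-ready context:</p>

```python
from pathlib import Path

def load_context(vault: Path, topic: str) -> str:
    """Gather markdown notes that mention a topic into one context string.

    A deliberately simple stand-in for real retrieval: the point is that
    a plain folder of .md files already works as an agent-readable
    knowledge base, with no specialized database required.
    """
    sections = []
    for note in sorted(vault.rglob("*.md")):
        text = note.read_text(encoding="utf-8")
        if topic.lower() in text.lower():
            # Label each chunk with its source path so you can always
            # trace which note a piece of context came from.
            sections.append(f"## Source: {note.relative_to(vault)}\n{text}")
    return "\n\n".join(sections)
```

<p>Because everything stays in plain files, the same vault works with any agent, survives tool changes, and diffs cleanly in git.</p>
<p>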
No lock-in, no export nightmares.</p><p><strong>Establish an ontology.</strong> An ontology is a formal structure for modeling knowledge&#8212;defining concepts and their relationships within a domain. Think about how Netflix understands the relationship between &#8220;genre&#8221; and &#8220;actor&#8221; to power recommendations. They&#8217;re not just storing data; they&#8217;ve built a conceptual framework that makes data meaningful.</p><p>For personal knowledge management, this means folders for specific content types&#8212;Goals, Projects, Resources, People&#8212;with metadata linking them together. A goal links to the projects pursuing it. A project links to the resources it uses and the people involved. A person links to the conversations you&#8217;ve had with them.</p><p>This isn&#8217;t bureaucracy. It&#8217;s leverage. As new information enters your system, the ontology ensures it connects to existing knowledge automatically. Every new note enriches the whole.</p><div><hr></div><h2>From Retrieval to Graph RAG</h2><p>I wrote about <a href="https://writings.chrisbunk.com/p/graph-rag">Graph RAG</a> in detail, but it&#8217;s worth revisiting in this context.</p><p>Traditional RAG treats your knowledge as a bag of disconnected paragraphs. You embed chunks, find the most &#8220;similar&#8221; ones to a query, and hope coherence emerges.</p><p>It works. Sort of. Until it doesn&#8217;t.</p><p>The failure mode appears when you need context that isn&#8217;t semantically similar to your query but is structurally essential. Ask &#8220;What&#8217;s the authentication approach for the payment service?&#8221; and vector search finds chunks mentioning &#8220;authentication&#8221; and &#8220;payment.&#8221; But it misses the shared auth library that all services inherit from. It misses the decision doc explaining <em>why</em> you chose OAuth. 
It misses the three other services using the same pattern, which would tell you this is established convention rather than a one-off.</p><p>These aren&#8217;t similar in embedding space. They&#8217;re <em>related</em> in structure.</p><p>Graph RAG combines vector search for semantic similarity with graph traversal for structural relationships. When you query, the system retrieves not just similar content but connected context&#8212;decisions that led here, patterns this follows, components this touches.</p><p>Databricks&#8217; research on long-context RAG performance found that across 2,000+ experiments on 13 LLMs, retrieval quality&#8212;not model capability&#8212;was the determining factor in RAG system effectiveness. More relevant context beats larger context windows.</p><div><hr></div><h2>The Context Engineering Discipline</h2><p>Google&#8217;s Agent Development Kit team recently published their framework for production-grade context management. Their core thesis: &#8220;Context is a compiled view over a richer stateful system.&#8221;</p><p>This is the mental shift required. Stop thinking about context as &#8220;stuff I paste into the prompt.&#8221; Start thinking about it as a compiled artifact&#8212;transformed, filtered, and optimized from underlying knowledge stores.</p><p>The ADK framework separates:</p><ul><li><p><strong>Sessions, memory, and artifacts</strong> as sources&#8212;the full, structured state</p></li><li><p><strong>Flows and processors</strong> as the compiler pipeline&#8212;transformations that shape context</p></li><li><p><strong>Working context</strong> as the compiled view shipped to the LLM for a single invocation</p></li></ul><p>Once you adopt this model, context engineering stops being &#8220;prompt gymnastics&#8221; and starts looking like systems engineering. You ask systems questions: What&#8217;s the intermediate representation? Where do we apply compaction? 
How do we make transformations observable?</p><p>The Manus agent team shares similar learnings. Their KV-cache hit rate&#8212;essentially measuring how much context can be reused across agent steps&#8212;is their single most important production metric. It directly affects both latency and cost. With Claude Sonnet, cached input tokens cost $0.30/MTok versus $3/MTok uncached. A 10x difference.</p><p>Context stability matters. Context structure matters. Context management is infrastructure.</p><div><hr></div><h2>Practical Tactics</h2><p>Beyond the architecture, here are specific tactics I&#8217;ve found valuable:</p><p><strong>Keep context window utilization intentional.</strong> Research on context window optimization shows that there&#8217;s an optimal balance between &#8220;enough context&#8221; and &#8220;too much noise.&#8221; Stuffing everything in degrades performance. Models perform better with fewer, more relevant documents than large volumes of unfiltered data.</p><p><strong>Use the LLM to generate context for the LLM.</strong> Before complex tasks, ask the model what information it would need to do the task well. Let it tell you what&#8217;s missing. Then provide it.</p><p><strong>Layer context by scope.</strong> Some context applies to all tasks (who you are, what you&#8217;re working on, your preferences). Some applies to a session (the specific project, the current goal). Some applies only to a single query. Structure your knowledge stores to match these scopes.</p><p><strong>Make context visible.</strong> Whatever system you build, ensure you can inspect what context is being passed to any given query. Debug your context the way you&#8217;d debug code.</p><p><strong>Invest in capture habits.</strong> The best context engineering means nothing if you don&#8217;t capture knowledge as it emerges. 
Daily notes, inbox processing, regular reviews&#8212;the discipline of capture compounds into the value of retrieval.</p><div><hr></div><h2>The Path Forward</h2><p><a href="https://www.prnewswire.com/in/news-releases/cognizant-to-deploy-1-000-context-engineers-powered-by-contextfabric-to-industrialize-agentic-ai-302541593.html">Cognizant recently announced they&#8217;re deploying 1,000 &#8220;context engineers&#8221;</a> powered by a new platform called ContextFabric. Whether that specific initiative succeeds or not, the signal is clear: enterprises are recognizing that context management is the bottleneck to agentic AI value.</p><p>MIT Technology Review&#8217;s 2025 retrospective characterized the year as a shift from &#8220;vibe coding&#8221; to &#8220;context engineering&#8221;&#8212;moving from a loose, intuition-based approach to systematic management of how AI systems process context.</p><p>The models will keep getting better. The orchestration frameworks will mature. But the fundamental constraint remains: agents can only work with what you give them.</p><p>Context is king. Everything else is optimization at the margins.</p><div><hr></div><p><em>I&#8217;m building an AI-first development workflow that combines dual track agile, TkDD (Ticket-Driven Development), and structured context engineering. More on the practical implementation in future posts.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://writings.chrisbunk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Graph RAG]]></title><description><![CDATA[Why Structure Beats Similarity]]></description><link>https://writings.chrisbunk.com/p/graph-rag</link><guid isPermaLink="false">https://writings.chrisbunk.com/p/graph-rag</guid><dc:creator><![CDATA[Christopher Bunk]]></dc:creator><pubDate>Fri, 19 Dec 2025 17:17:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!AbpU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Traditional RAG (Retrieval-Augmented Generation) has a dirty secret: it treats your knowledge like a bag of disconnected paragraphs. You embed chunks, find the most &#8220;similar&#8221; ones, and hope the AI can stitch together something coherent.</p><p>It works. Sort of. Until it doesn&#8217;t.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://writings.chrisbunk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The problem shows up when you need context that isn&#8217;t semantically similar to your query&#8212;but is structurally essential to answering it.</p><p><strong>The Limits of Vector Search</strong></p><p><a href="https://x.com/_avichawla/status/1989944406129193225">Avi Chawla&#8217;s visual explanation</a> captures the problem perfectly:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AbpU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AbpU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 424w, https://substackcdn.com/image/fetch/$s_!AbpU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 848w, https://substackcdn.com/image/fetch/$s_!AbpU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 1272w, 
https://substackcdn.com/image/fetch/$s_!AbpU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AbpU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png" width="1158" height="1054" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1054,&quot;width&quot;:1158,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:503496,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://writings.chrisbunk.com/i/182103442?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AbpU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 424w, https://substackcdn.com/image/fetch/$s_!AbpU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 848w, 
https://substackcdn.com/image/fetch/$s_!AbpU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 1272w, https://substackcdn.com/image/fetch/$s_!AbpU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa38cd333-9ad3-4c60-a11e-b1ae9c378aa9_1158x1054.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Imagine you want to summarize a biography where each chapter covers a different accomplishment of a person. 
With naive RAG, you retrieve the top-k most similar chunks to your query. But summarization needs the <em>full context</em>&#8212;all the accomplishments, not just the ones that happen to match your embedding.</p><p>The chunks about &#8220;won Nobel Prize&#8221; and &#8220;founded company&#8221; and &#8220;wrote bestseller&#8221; aren&#8217;t semantically similar to each other. They&#8217;re about completely different topics. Vector search won&#8217;t naturally connect them.</p><p>But they&#8217;re all one hop away from the same person node in a graph.</p><p>This is the core insight: <strong>semantic similarity and structural relevance are different things</strong>. Vector search optimizes for the former. Many tasks require the latter.</p><p>Ask a standard RAG system: &#8220;What&#8217;s the authentication approach for the payment service?&#8221;</p><p>It dutifully finds chunks mentioning &#8220;authentication&#8221; and &#8220;payment.&#8221; But it misses:</p><ul><li><p>The shared auth library that all services inherit from</p></li><li><p>The decision doc explaining <em>why</em> you chose OAuth over API keys</p></li><li><p>The architectural diagram showing the auth service&#8217;s relationship to payments</p></li><li><p>The three other services that use the same pattern (which would tell you this is established convention, not a one-off)</p></li></ul><p>These aren&#8217;t similar in embedding space. They&#8217;re <em>related</em> in structure. 
And that distinction matters enormously.</p><p><strong>Enter Graph RAG</strong></p><p>Graph RAG combines two retrieval mechanisms:</p><ol><li><p><strong>Vector search</strong> for semantic similarity (what sounds related)</p></li><li><p><strong>Graph traversal</strong> for structural relationships (what actually connects)</p></li></ol><p>The core idea, as Chawla illustrates:</p><ul><li><p>Create a graph (entities &amp; relationships) from your documents</p></li><li><p>During retrieval, traverse the graph to fetch connected context</p></li><li><p>Pass the structured context to the LLM</p></li></ul><p>For the biography example: the system creates a subgraph where the person is a central node, and each accomplishment is one hop away. When you ask for a summary, graph traversal fetches <em>all</em> accomplishments&#8212;not just the semantically similar ones. The structure captures what vector search can&#8217;t.</p><p>The graph captures relationships that vectors can&#8217;t: &#8220;depends on,&#8221; &#8220;implements,&#8221; &#8220;decided by,&#8221; &#8220;supersedes,&#8221; &#8220;owned by.&#8221; When you query, the system retrieves not just similar content but connected context&#8212;the decisions that led here, the patterns this follows, the components this touches.</p><p>This is closer to how human experts think. When a senior engineer answers your authentication question, they&#8217;re not doing semantic search in their head. They&#8217;re traversing a mental graph: &#8220;Payment service... that uses our standard auth pattern... which we chose because of the X decision... and it&#8217;s similar to how we did it in the Y service...&#8221;</p><p><strong>Why This Changes Code Understanding</strong></p><p>Codebases are graphs. Files import other files. Functions call functions. Classes inherit from classes. Decisions cascade through architectures. Developers learn from past patterns.</p><p>Vector search flattens all of this into a soup of embeddings. 
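</p>
<p>The biography example can be made concrete with a toy graph (the entities and triples below are invented for illustration). A single hop of traversal returns every accomplishment at once, even though the accomplishments have nothing to do with each other semantically:</p>

```python
from collections import defaultdict

# Tiny knowledge graph: (subject, relation, object) triples.
edges = [
    ("the author", "accomplished", "won Nobel Prize"),
    ("the author", "accomplished", "founded company"),
    ("the author", "accomplished", "wrote bestseller"),
]

neighbors = defaultdict(list)
for subject, _relation, obj in edges:
    neighbors[subject].append(obj)

def graph_retrieve(entity):
    # Structural retrieval: everything one hop from the entity node.
    # "won Nobel Prize" and "wrote bestseller" are far apart in
    # embedding space, but both are adjacent to the same person node,
    # so a summary query recovers all of them together.
    return neighbors[entity]
```

<p>A production system layers vector search on top of this traversal; the sketch isolates the structural half that vectors alone cannot provide.</p>
<p>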
Graph RAG preserves the structure.</p><p>Consider what you can answer with proper graph traversal:</p><ul><li><p>&#8220;What would break if I changed this function?&#8221; &#8594; Follow the dependency graph</p></li><li><p>&#8220;Why did we build it this way?&#8221; &#8594; Traverse to the decision docs and discussions</p></li><li><p>&#8220;Where else do we use this pattern?&#8221; &#8594; Find structurally similar implementations</p></li><li><p>&#8220;Who knows about this area?&#8221; &#8594; Track ownership and contribution relationships</p></li></ul><p>None of these are semantic similarity questions. They&#8217;re structural queries that require understanding relationships.</p><p><strong>Claude-MPM: Graph RAG in Practice</strong></p><p>I&#8217;ve been using <a href="https://github.com/bobmatnyc/claude-mpm/">claude-mpm</a> for agentic development, and it implements exactly this pattern. The architecture combines:</p><p><strong>Kuzu</strong> &#8212; A graph database that stores project-specific knowledge graphs. Not just facts, but relationships: which files relate to which decisions, which patterns connect to which implementations, which architectural choices cascade to which components. The graph persists across sessions, so agents build cumulative understanding of your codebase.</p><p><strong>MCP Vector Search</strong> &#8212; Semantic search over your code using embeddings. Find code by intent (&#8220;authentication logic&#8221;), not just keywords. This handles the similarity dimension.</p><p><strong>The combination</strong> &#8212; When an agent needs context, it&#8217;s not just grabbing the most similar chunks. 
It&#8217;s traversing relationships: &#8220;This file implements that pattern, which was decided in this doc, and relates to these three other services.&#8221; The retrieval is structurally aware.</p><p>The result: agents that actually understand your codebase&#8217;s architecture rather than just pattern-matching against text.</p><p><strong>The Context Quality Problem</strong></p><p>Here&#8217;s the insight that makes Graph RAG essential for AI-assisted development: <strong>context quality determines output quality</strong>.</p><p>Feed an AI the wrong context and it confidently produces wrong answers. Feed it incomplete context and it hallucinates the gaps. The difference between helpful AI and frustrating AI often isn&#8217;t model capability&#8212;it&#8217;s retrieval quality.</p><p>Graph RAG attacks this directly:</p><ul><li><p><strong>Relevant context</strong> &#8212; Graph relationships filter for what actually matters, not just what sounds similar</p></li><li><p><strong>Complete context</strong> &#8212; Traversal brings in structurally connected information that vector search would miss</p></li><li><p><strong>Prioritized context</strong> &#8212; Relationship types help rank what&#8217;s most important (direct dependency &gt; distant reference)</p></li></ul><p>When your agents have better context, they produce better code. It&#8217;s that simple. And that hard to achieve with vectors alone.</p><p><strong>Building Your Graph</strong></p><p>The graph doesn&#8217;t build itself. You need to capture relationships as they form:</p><p><strong>Structural relationships</strong> &#8212; Parse imports, dependencies, inheritance, API contracts. These are deterministic and should be automated.</p><p><strong>Decision relationships</strong> &#8212; Link implementations to the decisions that shaped them. 
This requires discipline (or tooling like TkDD that captures decisions in tickets and connects them to code).</p><p><strong>Pattern relationships</strong> &#8212; Identify similar implementations and connect them. Partly automated through code analysis, partly human curation.</p><p><strong>Ownership relationships</strong> &#8212; Track who built what, who reviewed what, who owns what areas. Inferred from git history and organizational structure.</p><p>The initial investment pays compounding returns. Every relationship you capture makes future retrieval more precise.</p><p><strong>The Ontology Layer</strong></p><p>Graph RAG gets even more powerful when you add an ontology&#8212;a schema that defines what types of nodes and relationships exist in your domain.</p><p>For a codebase, your ontology might include:</p><ul><li><p><strong>Node types</strong>: Service, Function, Decision, Pattern, Team, Document</p></li><li><p><strong>Relationship types</strong>: implements, depends_on, decided_by, owned_by, similar_to</p></li></ul><p>The ontology does two things:</p><ol><li><p><strong>Constrains the graph</strong> &#8212; You can&#8217;t have nonsense relationships like &#8220;Function implements Team&#8221;</p></li><li><p><strong>Enables typed queries</strong> &#8212; &#8220;Find all services that depend on auth AND were decided by an architectural decision record&#8221;</p></li></ol><p>This is the difference between a pile of connected nodes and a structured knowledge system.</p><p><strong>Where This Is Going</strong></p><p>The future of AI-assisted development isn&#8217;t better models&#8212;it&#8217;s better context. Models are already capable enough. What limits them is our ability to provide the right information at the right time.</p><p>Graph RAG is part of that answer. Vector search alone isn&#8217;t sufficient for understanding structured domains like code. You need both similarity and structure.</p><p>The tools are maturing. 
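</p>
<p>Before reaching for a graph database, the ontology idea is cheap to prototype. The sketch below (my own minimal validation logic, not any specific library&#8217;s API) uses the node and relationship types from the example above to both reject nonsense edges and answer a typed query:</p>

```python
# Allowed (source type, relation, target type) triples: the ontology.
SCHEMA = {
    ("Service", "depends_on", "Service"),
    ("Service", "decided_by", "Decision"),
    ("Service", "owned_by", "Team"),
    ("Function", "implements", "Pattern"),
}

class TypedGraph:
    def __init__(self):
        self.node_types = {}  # node name -> node type
        self.edges = []       # (source, relation, target)

    def add_node(self, name, node_type):
        self.node_types[name] = node_type

    def add_edge(self, src, relation, dst):
        triple = (self.node_types[src], relation, self.node_types[dst])
        if triple not in SCHEMA:
            # The ontology constrains the graph: edges like
            # "Function implements Team" are rejected outright.
            raise ValueError(f"relationship not in ontology: {triple}")
        self.edges.append((src, relation, dst))

    def sources(self, relation, dst):
        """Typed query: every node linked to dst via relation."""
        return [s for s, r, d in self.edges if r == relation and d == dst]

g = TypedGraph()
g.add_node("payments", "Service")
g.add_node("auth", "Service")
g.add_node("ADR-12", "Decision")
g.add_edge("payments", "depends_on", "auth")
g.add_edge("payments", "decided_by", "ADR-12")
```

<p>A question like &#8220;which services depend on auth?&#8221; becomes <code>g.sources("depends_on", "auth")</code>, and the schema guarantees the answer is well-typed.</p>
<p>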
Kuzu, Neo4j, and other graph databases are getting easier to integrate. Vector databases are becoming commoditized. The combination&#8212;Graph RAG&#8212;is becoming a recognizable pattern.</p><p>If you&#8217;re building AI-assisted development tools or workflows, this is worth understanding. The teams that get context right will ship circles around teams that don&#8217;t.</p><div><hr></div><p><em>I&#8217;m building an AI-first development workflow that combines dual track agile, TkDD (Ticket-Driven Development), and tools like claude-mpm. More on that in future posts.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://writings.chrisbunk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Hello, World]]></title><description><![CDATA[Building in the age of AI]]></description><link>https://writings.chrisbunk.com/p/hello-world</link><guid isPermaLink="false">https://writings.chrisbunk.com/p/hello-world</guid><dc:creator><![CDATA[Christopher Bunk]]></dc:creator><pubDate>Mon, 15 Dec 2025 15:09:44 GMT</pubDate><content:encoded><![CDATA[<p>There&#8217;s a certain irony in a technology executive writing &#8220;Hello, World&#8221; as a first post. 
It&#8217;s been 25 years since I wrote that phrase in C++, nervous and excited, having no idea where it would lead.</p><p>It led here: to a career building products and leading engineering teams, most recently as a CPTO-level executive where I sit at the intersection of product, technology, and business strategy. And now, to this newsletter.</p><h2>Why I&#8217;m writing</h2><p><strong>First, I need to think out loud.</strong> The best ideas I&#8217;ve ever had didn&#8217;t come from reading or meetings&#8212;they came from writing. There&#8217;s something about putting thoughts into words that forces clarity. Writing is how I figure out what I actually believe versus what I&#8217;m just repeating.</p><p><strong>Second, we&#8217;re living through a transformation.</strong> The rise of AI isn&#8217;t just changing what we build&#8212;it&#8217;s changing how we build, how we lead teams, and what &#8220;product&#8221; even means. I&#8217;ve spent the past two years going deep on AI-first product development, and I want to share what I&#8217;m learning while it&#8217;s still fresh. Some of it will be wrong. That&#8217;s fine. Working in public means correcting in public too.</p><p><strong>Third, I want to connect with others navigating the same questions.</strong> How do you lead a team when AI can do half of what junior engineers used to do? How do you set product strategy when the underlying technology shifts quarterly? How do you stay relevant&#8212;and help your teams stay relevant&#8212;in a world that&#8217;s changing this fast?</p><p>I don&#8217;t have all the answers. But I have some hard-won lessons, a few frameworks that have served me well, and a genuine curiosity about what comes next.</p><h2>What to expect</h2><p>I&#8217;ll be writing weekly about:</p><ul><li><p><strong>AI-first product development</strong> &#8212; Not the hype. 
The practical reality of building products in the age of LLMs.</p></li><li><p><strong>Technology leadership</strong> &#8212; Managing teams, making decisions, navigating organizations.</p></li><li><p><strong>The craft of building</strong> &#8212; Systems thinking, technical strategy, and the messy work of shipping.</p></li></ul><p>My goal is to write things I wish I could have read five years ago. Essays that are useful, concrete, and honest about the tradeoffs.</p><h2>Who this is for</h2><p>If you&#8217;re a product manager, engineer, designer, or technology leader trying to figure out how AI changes your work&#8212;this is for you.</p><p>If you&#8217;re an executive thinking about how to transform your organization&#8212;this is for you.</p><p>If you&#8217;re someone who loves building things and wants to get better at it&#8212;this is for you.</p><p>Really, I&#8217;m writing this for myself. But if you&#8217;re someone who thinks about these problems too, I&#8217;d love to have you along.</p><p>Let&#8217;s figure this out together.</p><p>&#8212; Chris</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://writings.chrisbunk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://writings.chrisbunk.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>