<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai-Agents on Yaklab</title><link>https://www.yaklab.org/tags/ai-agents/</link><description>Recent content in Ai-Agents on Yaklab</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><copyright>© 2026 James Ainslie</copyright><lastBuildDate>Sun, 01 Feb 2026 11:52:25 -0500</lastBuildDate><atom:link href="https://www.yaklab.org/tags/ai-agents/index.xml" rel="self" type="application/rss+xml"/><item><title>Golden Test Methodology (With a Twist)</title><link>https://www.yaklab.org/posts/golden-test-methodology/</link><pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate><guid>https://www.yaklab.org/posts/golden-test-methodology/</guid><description>&lt;p&gt;This document describes how we built comprehensive golden test coverage for gomdlint&amp;rsquo;s 55 lint rules — and how we used AI agents to do most of the work.&lt;/p&gt;
&lt;p&gt;Golden testing is a well-known technique: you capture the output of your program, commit it, and fail the build if the output ever changes unexpectedly. What makes this project interesting is the scale and the process. We needed ~170 carefully constructed markdown files, each designed to trigger exactly one rule while avoiding false positives from the other 54. Writing those files required detailed knowledge of every rule&amp;rsquo;s behavior, its edge cases, and the dozen or so ways you can accidentally create a bad test input. That&amp;rsquo;s a lot of domain knowledge to hold in your head across 55 rules.&lt;/p&gt;</description></item></channel></rss>