March 27, 2026
My Agent Adoption Journey
Having been somewhat of an AI sceptic last year, at least when it came to agentic coding, I saw 2026 bring major changes for me. Below I share what I have learned from using AI coding agents so far.
AGENTS.md
Heavily inspired by Peter Steinberger's writings, I started by copying his AGENTS.md file to my machine and refactored it down to what I initially thought I needed.
I still find it hard to judge what belongs in my global AGENTS.md file. Currently I find myself stripping more and more content out of it without noticing any notable changes or issues.
CLI beats MCP / Tool Definitions
Instead of bloating my agent setup with tons of tool descriptions or MCP servers, I rather tell the agent which CLI to use for which purpose. The agent can help itself by running `--help`.
E.g. for pull request comments my agent knows that it can just fetch them via the pr comments command from my own CLI helper collection, since AGENTS.md gives a hint:
- for everything related to pull requests use the pr command; run pr --help to get more info

This might of course add a little more token usage, but at current pricing I'd rather take that than maintain tool / MCP setups.
I can just extend my CLIs and the agent automatically learns how to use them via --help.
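To make the pattern concrete: the author's actual pr helper is not shown, so the following is a hypothetical stand-in, but it sketches how little is needed. An argparse-based CLI gets a useful --help output for free, which is exactly what the agent reads to teach itself the tool:

```python
# Hypothetical sketch of a "pr" helper CLI (not the author's actual tool).
# argparse auto-generates the --help text that an agent can read.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="pr",
        description="Helpers for working with pull requests.",
    )
    sub = parser.add_subparsers(dest="command", required=True)

    # "pr comments <number>" — the subcommand hinted at in AGENTS.md.
    comments = sub.add_parser("comments", help="Fetch comments for a pull request")
    comments.add_argument("number", type=int, help="Pull request number")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    # Dispatch would go here, e.g. calling the GitHub API for args.number.
    print(f"would fetch comments for PR #{args.number}")
```

Running `pr --help` or `pr comments --help` then prints the program description, subcommands, and arguments, so every new subcommand added to the parser is automatically discoverable by the agent without touching any tool or MCP definition.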
Should you go Spec Driven?
At the beginning of 2026 I got a tip to try the Conductor extension for the Gemini CLI. So I did. Having quite strict guardrails on how to interact with the AI was definitely a big plus in the beginning. I scaffolded about 2-3 apps I had had in mind for a long time and was quite impressed with the approach and the results.
Being restricted to the Gemini CLI and their models was not an option for me, so I tried different ports of Conductor with Codex & Claude.
Ultimately I ended up annoyed by the number of interventions Conductor requires. For small bugfixes I don't want the full overhead of creating a spec and an implementation plan with three steps of user verification. The agent should just fix it and report back once it's done.
With that first taste of frustration I started to just use opencode without any spec-driven approach. Again I scaffolded different projects, although these were smaller ones.
What you can clearly notice is:
- scaffolding projects with a spec-driven tool works way better than doing it freestyle
- it's way easier to consistently ship the same quality with a spec-driven approach
- docs as part of the repo would help a new teammate, and they do the same for an agent
Right now I am working on my own lightweight, doc-driven approach that aims to strike a balance between on-the-fly prompting and serious feature planning.
Local First
I experimented with managing issues in Linear, aiming for some kind of asynchronous, maybe cloud-based workflow in the future, where I can just pull up a Linear issue and let some kind of cloud agent develop it on the fly.
For sure this might open up several advantages, but I currently see too many roundtrips being added compared to just running stuff on my machine.
Having AI agents work in some cloud sandbox feels to me like maintaining CI pipelines: you do stuff, you wait, the build is red, you do stuff, you wait.
What happens if you're not happy with the agent's result? You need to pull the branch, maybe even figure out how to run what the agent just built. You reply in a GitHub issue, the agent replies or asks questions.
It all happens asynchronously, compared to just having VS Code open anyway with the agent replying relatively synchronously in your command line.
Maybe I am not getting the full picture, or I'm too scared of giving away the pseudo-feeling of control I get when stuff happens on my machine; not sure.
As of now, I will keep my agentic workflows local first 🙂
End of note. Start of the next rabbit hole.
If this resonated, there is more where this came from.