Recapping early Aider installation and agent-assisted specs
One of the more helpful articles I found a while back was https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/. It described workflows for both greenfield and maintenance work, included practical prompt examples, and pointed me toward initially using Aider with both the Anthropic APIs and local LLMs (Qwen).
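As an aside, Aider can also be driven from its Python scripting interface rather than the chat UI. Here is a minimal sketch of targeting either an Anthropic model or a local Qwen model served through Ollama; the model strings and file names are placeholders, not the exact setup I used:

```python
# Minimal sketch of scripting Aider against a hosted or a local model.
# Model strings and file names are placeholders.
from aider.coders import Coder
from aider.models import Model

# Hosted option: an Anthropic model (expects ANTHROPIC_API_KEY in the environment).
anthropic_model = Model("claude-3-5-sonnet-20240620")

# Local option: a Qwen model served by Ollama (expects OLLAMA_API_BASE to be set).
local_model = Model("ollama/qwen2.5-coder:32b")

# Give the coder the files it is allowed to edit, then run one instruction.
coder = Coder.create(main_model=anthropic_model, fnames=["spec.md"])
coder.run("Expand the notes in spec.md into a developer-ready specification.")
```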
It solidified the idea of using a conversational model to generate ideas and move them into a spec, then a reasoning model to generate an implementation plan. I also experimented some with having agents then generate issues from the planning document. That script was moved into the infra repo.
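The real script lives in the infra repo, but the idea is roughly the following sketch, assuming the plan is a markdown file with one `## ` heading per step and an authenticated GitHub CLI (`gh`):

```python
# Rough sketch only -- the actual script is in the infra repo.
# Assumes a markdown plan where each "## " heading is one implementation step,
# and that the GitHub CLI (gh) is installed and authenticated.
import re
import subprocess
import sys


def split_plan(markdown: str) -> list[tuple[str, str]]:
    """Split the planning document into (title, body) pairs, one per '## ' heading."""
    sections = re.split(r"^## ", markdown, flags=re.MULTILINE)[1:]
    steps = []
    for section in sections:
        title, _, body = section.partition("\n")
        steps.append((title.strip(), body.strip()))
    return steps


def create_issues(plan_path: str, repo: str) -> None:
    with open(plan_path, encoding="utf-8") as f:
        steps = split_plan(f.read())
    for title, body in steps:
        subprocess.run(
            ["gh", "issue", "create", "--repo", repo, "--title", title, "--body", body],
            check=True,
        )


if __name__ == "__main__":
    create_issues(sys.argv[1], sys.argv[2])  # e.g. plan.md myorg/myrepo
```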
Below are the example prompts, slightly modified from the article:
Idea generation:
Ask me one question at a time so we can develop a thorough, step-by-step spec for this idea. Each question should build on my previous answers, and our end goal is to have a detailed specification I can hand off to a developer. Let’s do this iteratively and dig into every relevant detail. Remember, only one question at a time.
Here’s the idea: An application that tracks the growing of seeds into plant starts. Basically a micro-propagation application for home users. The application should use Java for the back-end and JavaScript and React for the front-end.
Spec compilation (once the brainstorming wraps up):
Now that we’ve wrapped up the brainstorming process, can you compile our findings into a comprehensive, developer-ready specification? Include all relevant requirements, architecture choices, data handling details, error handling strategies, and a testing plan so a developer can immediately begin implementation.
Planning:
Draft a detailed, step-by-step blueprint for building this project. Then, once you have a solid plan, break it down into small, iterative chunks that build on each other. Look at these chunks and then go another round to break them into small steps. Review the results and make sure that the steps are small enough to be implemented safely, but big enough to move the project forward. Iterate until you feel that the steps are right-sized for this project.
From here you should have the foundation to provide a series of prompts for a code-generation LLM that will implement each step. Prioritize best practices and incremental progress, ensuring no big jumps in complexity at any stage. Make sure that each prompt builds on the previous prompts and ends with wiring things together. There should be no hanging or orphaned code that isn't integrated into a previous step.
Make sure to separate each prompt section. Use markdown. Each prompt should be tagged as text using code tags. The goal is to output prompts, but context, etc. is important as well.