Best Practices for Amplified Coding

Flushing Code

Many developers adopt a “refactor” mindset: when working on a feature and encountering messy code, they refactor it to make it cleaner and more maintainable. This practice is beneficial in traditional coding. Generative AI, however, is less adept at refactoring code; it excels at flushing code. Flushing code means removing the current implementation and creating a new version with the same features. So, even when the right model is selected and the proper context has been added, be aware that instructing the model to “refactor” the code is not always the best option.
The latest models perform this task by default. If they encounter issues that cannot be resolved, they create a .bak file (a backup of the original) and start a fresh implementation in the main file, effectively flushing the old code and rewriting it completely. This avoids retaining outdated code and leverages the model’s strength in generating fresh code. Developers can also trigger this process deliberately from the start by asking the tool to create a .bak file of the current file and implement the changes anew.
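As a sketch, the manual variant of this process looks like the following; the file name app.py and the prompt wording are illustrative assumptions, not fixed conventions:

```shell
# Illustrative only: app.py stands in for the file whose code is being flushed.
printf 'def feature():\n    return "v1"\n' > app.py   # the existing implementation

cp app.py app.py.bak   # preserve the old implementation as a backup
: > app.py             # empty the main file; the agent writes a fresh version here

# A prompt along these lines (wording is an assumption) then drives the rewrite:
#   "Reimplement the features of app.py from scratch. app.py.bak contains the
#    previous version for reference; keep the same public interface."
```

The backup keeps the old behavior available for comparison, and the emptied main file prevents the model from anchoring on the outdated implementation.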
Flushing also brings additional advantages: the fresh implementation can address the latest security issues and incorporate contemporary coding techniques directly, improving quality beyond what an in-place refactor typically achieves.
Recognizing when to flush code and when to refactor is essential. Developers should gain experience with both through practical application; given the rapid development of models, their approach will evolve over time.

Version Control

When using AI agents to generate code, the volume of changes introduced into an application can grow rapidly. With that speed comes increased risk: mistakes can happen, and they can be hard to track. That’s why version control systems like Git are not just useful but essential; they allow teams to revert to a stable version whenever something goes wrong.
Many “vibe coding” tools don’t integrate version control by default, making it dangerously easy to lose track of working code. There have been real-world cases where vibe coders worked with AI tools for months, only for the application to break completely; without proper versioning, those months of work were lost.

IDEs can be integrated with version control systems so that versioning becomes a seamless part of your development workflow. Additionally, generative AI can assist by generating commit messages that adhere to the guidelines established by your team.
The good news: most developers already use version control in their workflow. But if that’s not the case, now’s the time to set it up properly. It’s a small step that protects your team from big setbacks.
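A minimal sketch of this safety net, using a throwaway repository; the file names and the Conventional Commits message style are illustrative assumptions, not requirements:

```shell
mkdir -p demo-repo
git -C demo-repo init -q
git -C demo-repo config user.name  "Dev"
git -C demo-repo config user.email "dev@example.com"

# A known-good state, committed with the kind of guideline-conforming
# message an AI assistant might generate for you.
echo "stable implementation" > demo-repo/feature.txt
git -C demo-repo add feature.txt
git -C demo-repo commit -q -m "feat: add initial feature implementation"

# A large AI-generated change that turns out to be broken.
echo "broken AI-generated change" > demo-repo/feature.txt
git -C demo-repo commit -q -am "feat: rework feature with AI assistance"

# Revert the last commit to return to the stable version.
git -C demo-repo revert --no-edit HEAD
```

Because every change is a commit, rolling back months of AI-generated work is a single command rather than a lost project.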

Amplified Platforms

Beyond the generative AI functionality within IDEs, generative AI is now found across the complete Software Development Lifecycle (SDLC). Many IDE-integrated tools extend into the SDLC by connecting to different platforms, for example via the Model Context Protocol (MCP), while other capabilities are offered directly within the platforms themselves.
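As an illustration, many MCP-capable tools are configured with a small JSON file listing the servers to connect to. The exact file name and schema vary per tool, and the server package and token placeholder below are assumptions to be replaced with your own; a sketch for a GitHub MCP server might look like this:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```

Once configured, the agent in the IDE can read issues, branches, and pull requests on the platform through the MCP server instead of being limited to local files.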

Figure. Amplified quality engineering supports all activities.


One example is GitHub, a platform used by many development teams to share and collaborate on code repositories. GitHub already offers options to log issues, create pull requests, set up Actions, and even manage your projects. But with the integration of GitHub Copilot within the GitHub platform, new possibilities arise.


For example, it is now possible to create an issue or user story within the platform and assign it to GitHub Copilot. Copilot analyzes the issue and, working as a background agent, creates a new branch and a pull request containing a proposed solution. A developer can then review the code, refine it by checking out the branch in their IDE, or simply review it on the platform and assign GitHub Copilot to it once again.

Background Agent(s)

Next to background agents on amplified platforms, agents have also been introduced within the IDE. When planning files, rule files, or reasoning models are used appropriately, it may take some time for an agent to complete the specified tasks. Previously, it was not possible to perform other work in the IDE while the agent was running. This limitation has been addressed with the introduction of background agents, which allow developers to assign tasks to an agent that operates in the background. As a result, developers can continue their work in the IDE and review the agent’s output once the tasks have been completed.

Agentic Swarms

The discussion so far has covered agents within an IDE, as well as background agents that can be started either on amplified platforms or directly in the IDE. Another form of agentic programming involves agentic swarms. In this context, “agentic” refers to multiple agents operating in the background or under human supervision. An agentic swarm consists of several agents working collaboratively on one or more tasks in the background, each with its own specialism. Each agent has a defined set of tools, data, and memory, based on what it needs to perform its task or set of tasks. For example, a user story created in GitHub may be handled by a swarm in which one agent refines the user story, another develops the software, a third writes documentation, and a fourth performs testing. Humans orchestrate those agents and refine them based on their needs. A simple example of how this is already used starts with creating a planning file (issue) in GitHub. You assign this issue to Claude Code, a coding agent running in your terminal, and let it build the feature.
Claude Code pushes the changes to a new branch in Git and creates a pull request. GitHub Copilot starts a review of this pull request, acting as the “LLM as a judge” described in
Section 29.1. The review comments are returned to Claude Code in a planning file (for example, reviewComments.md). Claude Code reviews these comments, makes any necessary fixes in the
code, and pushes the updates back to Git. Finally, a human expert reviews the code, tests it, makes any fixes if needed, and merges it. Since Claude Code runs in the terminal, multiple instances can
work in parallel on different branches.
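The review loop described above can be sketched as a small script. The real agent calls (Claude Code building the feature, Copilot reviewing the pull request) are replaced here by stand-in shell functions, so only the orchestration pattern is shown; every name in the sketch is an illustrative assumption:

```shell
implement() {   # stand-in for: Claude Code building or fixing the feature
  echo "feature v$1" > feature.txt
}
review() {      # stand-in for: GitHub Copilot reviewing the pull request
  if grep -q "v2" feature.txt; then
    echo "approved"
  else
    echo "please bump to v2" > reviewComments.md   # comments fed back to the coding agent
    echo "changes-requested"
  fi
}

implement 1
verdict=$(review)
while [ "$verdict" != "approved" ]; do
  implement 2          # the agent reads reviewComments.md and fixes the code
  verdict=$(review)
done
echo "ready for human review and merge"
```

The loop only terminates when the reviewing agent approves, after which the result is handed to the human expert for final review, testing, and merging.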

Figure. Agentic Swarms.