In traditional programming, products are typically developed based on user stories. Product Owners or other stakeholders write user stories, which the project team (developers) then discusses, analyzes, and refines, and finally converts into working code, tests, and deploys as new features.

When working with Generative AI, the role of user stories shifts from guiding human developers to guiding the AI itself. Instead of writing user stories for developers, teams now create planning files: structured prompts or inputs designed specifically for Generative AI. These planning files function like user stories, but for the AI, helping it generate solutions in collaboration with a domain expert.
In this new approach, user stories are no longer just for human understanding; they become part of the input that instructs the AI to produce usable output. Alongside these planning files, (global) rule files can be defined: they set consistent coding standards, design patterns, and team conventions, ensuring the AI's output aligns with agreed-upon practices, much like a coding style guide would for a human team.
AI assistants and agents are available in most commonly used IDEs, so they fit directly into existing workflows. Pipelines and other quality-measurement tools remain functional when using these integrations. As Generative AI becomes incorporated into more tools and platforms, its availability in standard workflows is expected to increase, supporting tasks in accordance with current practices.
Most development teams have coding style guides, which define how code should be written and what constitutes the expected level of quality. These guidelines can also be applied to Generative AI coding tools by means of "Global Rule Files" or "Rule Files", an extension of existing quality measures. These are simple .md files that explain to the model, in natural language, how it should act, respond, and behave. For example, you can define which programming languages are used within the project, how code should be written (clean code practices), the architecture setup and its boundaries, how the project is structured, and how the model should structure files. Rule files can be shared across teams and team members, so everyone within the product team has the same starting point and all models are guided the same way. Product teams should create rule files before they start using these models.
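As a sketch, a minimal rule file could look like the following. The file name, location, and project details here are hypothetical; tools differ in where they expect rules (for example, `.cursorrules` for Cursor or `.github/copilot-instructions.md` for GitHub Copilot):

```markdown
<!-- rules.md — hypothetical global rule file -->
# Project rules

## Languages and tooling
- Backend: Python 3.12; frontend: TypeScript with React.
- Use the repository's existing linters; do not introduce new dependencies without asking.

## Code style
- Follow clean code practices: small functions, descriptive names, no dead code.
- Every public function gets a docstring and a unit test.

## Architecture and structure
- Respect the layered architecture: UI → services → repositories; never call the database directly from the UI layer.
- Place new components under `src/components/`, one component per file.
```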
With the foundational setup complete, the next step is to begin using agents within your IDE. There are multiple approaches for utilizing Generative AI coding tools integrated into your IDE.
You can use the chat, via text or voice, to guide the model and build features while conversing, use autocomplete in your editor, or, for example, refer to planning files via the chat. Planning files are simple .md files that guide the model, in natural language, in implementing a specific feature: you outline what should be built and how. You can compare them with user stories, but for Generative AI models this approach is also called "Spec-Driven Development."
Spec-Driven Development (Specification-Driven Development) is a software development approach where the creation of specifications (or “specs”) comes before writing the actual code. These specifications clearly define how a system or component should behave, often in a way that is understandable by both developers and stakeholders.
A recommended approach is to add, at the end of the file, a step-by-step guide for the model to follow when implementing and testing the feature. Additionally, it is advisable to ask the model to sign off each step it completes, using markdown within the planning file. This allows you to track its progress easily, and if anything goes wrong (the model gets stuck, or you want to continue later), you can simply ask the model to resume work on your planning file. It is also important to ask the model to update the planning file if changes are made outside of it, so that it stays aligned with the actual implementation and remains useful as documentation later.

Planning files can be stored within the repository and, when named correctly, serve as valuable documentation: developers and testers can see which features were built and why, and the model can reference them later when new features interact with previous implementations.
Creating a planning file is straightforward. Some modern tools include this functionality by default; otherwise, you can ask the model to create a plan.md or planning.md file. This file provides an outline based on your initial expectations, which can then be developed further according to the specified requirements. Either way, it is best to validate or edit the generated plan yourself before execution: if your request is unclear or too generic, the tool may make assumptions and proceed based on its own interpretation rather than your exact requirements.
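As an illustration, a planning file with a sign-off checklist might look like this (the feature, endpoint, and file names are hypothetical):

```markdown
<!-- planning.md — hypothetical planning file for one feature -->
# Feature: password reset via e-mail

## What
Users can request a password-reset link from the login page.

## How
Add a `POST /password-reset` endpoint, an e-mail template, and a reset form.

## Steps (sign off each step with [x] when completed)
- [x] 1. Add the endpoint and token generation
- [x] 2. Add the e-mail template and sending logic
- [ ] 3. Build the reset form in the frontend
- [ ] 4. Write unit and integration tests

> Keep this file updated if the implementation deviates from the plan.
```

The checklist lets you see at a glance where the agent stopped and ask it to resume from the first unchecked step.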
In addition to providing natural language instructions for the agent, it is recommended to include relevant, rich context. When developers specify which files need to be modified, where changes should occur, or which files should be analyzed, the relevant files can often be shared directly with the agent by attaching them in the chat. This can typically be done by selecting the desired files within the chat interface, or by referencing them using the @ or / symbol, which usually opens a modal for selecting or searching the necessary file. Providing this level of detail helps the agent identify which files to analyze or modify, supporting more accurate results. Planning files should be referenced the same way, rather than named in plain text. Images such as screenshots or designs may also be shared; these visuals can help the model or agent understand the intended outcome or construct the specified design.
Most tools let you choose from several models before sending the prompt to the agent, with or without a planning file. Commonly available models include those from Anthropic (Claude), OpenAI (GPT), Google (Gemini), and others, each with characteristics that may make them more suitable for certain types of tasks. Some models generally perform better on frontend-related tasks, while others are more effective for backend work or documentation. Depending on whether a task is abstract or clearly defined, it may be beneficial to use a reasoning-oriented model or one without such capabilities. Models also vary in cost, so selecting a model influences overall expenses. Costs are typically determined by tokens, although many IDE tools, like GitHub Copilot or Windsurf, charge through monthly subscriptions or credit-based systems.
Examples of prompt-based development platforms include Lovable, Bolt.new, Base44, and V0, among others. Similar to ChatGPT, these platforms typically feature an input field where users can enter prompts. Upon execution, the platform generates a plan (such as a planning.md file) and initiates development within the environment. Some tools provide functionality for reviewing and editing the generated code, but most users primarily employ a prompt-driven workflow to develop applications with these solutions.
In some cases, companies may choose to serve their own models, often open source, for reasons such as data protection, compliance, or infrastructure control. These self-hosted models can be integrated into IDEs as well, offering more flexibility and privacy, though they often require additional setup and maintenance.

Given the rapid pace at which models are released and their evolving performance characteristics, this book does not name specific models. We recommend consulting up-to-date benchmarks and conducting your own experiments over time to identify the most suitable model for your particular task. Choosing the right model for a specific task is an essential part of the revised workflow.
It can be challenging to specify necessary changes within a repository, and creating a planning file may not always be feasible. Developers frequently require additional time to thoroughly investigate tasks. In the case of bugs, for example, it is necessary to first determine the root cause and identify the appropriate solution. Generative AI can be highly effective in this context, assisting by articulating the nature of the problem and utilizing various tools to analyze issues, as illustrated in the “Short Feedback Loop” example below. Generative AI can conduct issue analysis, collect information on potential problems, and compile findings into a planning.md file or in the chat. A developer can then review these results and decide on the subsequent course of action for the agent.
Agents can also support writing documentation by leveraging their understanding of the codebase alongside the planning file they collaborate on with developers. This enables them to generate or update documentation for elements such as API endpoints, components, and even the README.md file, based directly on the planned changes, rather than inferring modifications from the code itself.
When clear rules are defined in the rule files, this process can be automated: as changes are executed based on the planning file, the agent updates the relevant parts of the documentation accordingly. This ensures that documentation evolves in step with development, reducing manual effort and keeping information accurate and current. By integrating documentation into the agent-assisted workflow, teams benefit from better knowledge sharing, fewer outdated docs, and a smoother overall development experience.
In addition to generating documentation, Generative AI can play a valuable role in the architectural process. For instance, an AI agent can identify the current system state and visualize it using tools like PlantUML, Mermaid, or C4 models. This visualization can serve as a starting point for architects to gain insight into the existing scope and build upon it. It can also be used as input for planning files, providing additional context to help the model implement new features more effectively. Additionally, questions can be asked about the current state, to which the agent can respond with feedback in real time. AI can even assist in drafting Architecture Decision Records (ADRs) and performing impact analysis to evaluate the effects of proposed changes. Generative AI can also support visualizing processes using BPMN (Business Process Model and Notation), a standard widely used by many organizations. This further aids architectural clarity and collaboration, especially when communicating designs to stakeholders with varying levels of technical expertise.

Moreover, as Generative AI becomes integrated into more products, particularly multimodal products that support both textual and visual input and output, these tools will increasingly deliver AI-powered insights that enrich the architectural process. The possibilities are endless; this is just one way to inspire you to consider how Generative AI can be integrated into your own architectural workflows.
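For example, an agent asked to visualize the current system state could produce a small Mermaid diagram like the sketch below (the components shown are hypothetical):

```mermaid
flowchart LR
    UI[Web frontend] --> API[Order service]
    API --> DB[(Orders database)]
    API --> MQ[[Message queue]]
    MQ --> BILLING[Billing service]
```

Such a generated diagram is a starting point for architects to verify and refine, not a finished artifact.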
In addition to these capabilities, the latest tools are introducing even more advanced features that can enhance the team's workflow. For instance, an integrated browser now allows both developers and testers to directly evaluate the built frontend. Users can explore the application locally, in a familiar manner, but with the browser linked to the IDE. This connectivity enables users to click on specific elements within the application and provide that context directly to their AI agent. Logs (console logs or errors) can be used the same way, as can screenshots taken from the application directly through the browser and fed into the chat tab.

For instance, suppose a dropdown in the navigation bar is misaligned. Using the selection tool in the browser within the IDE, this element can be highlighted, and its context is automatically placed into the chat. Likewise, console errors shown in the browser's inspection tab can be added as context to the chat directly. Developers or testers can then specify what needs improvement. This eliminates the need to describe the full context manually, as the AI helps capture it effectively. Once the prompt is sent, the agent takes over and works on resolving the issue.
Furthermore, Generative AI can review newly submitted code within Pull Requests, allowing developers to focus on development while receiving immediate feedback during or after the coding process. Since developers typically spend a significant amount of time reviewing code, this can reduce the overall time spent on the process, freeing up time for other valuable activities. Tip: You can save this feedback as a planning.md file, to hand over to Generative AI for addressing the review comments.
Overview