Prompt Library

A Prompt Library is essentially a curated collection of well-crafted prompts designed for use with Generative AI models. Think of it as a toolbox filled with ready-to-use instructions and examples that help you get more specific, creative, or effective outputs. Instead of starting from scratch every time, you can browse and select prompts tailored to various tasks, from generating creative text formats to summarizing information or even translating languages. These libraries often categorize prompts by use case, making it easier to find the perfect starting point for your needs.


Prompt Libraries are not only useful for sharing prompts across your organization; they also help with adoption and inspiration. Users of a Prompt Library may get ideas for new prompts inspired by prompts that other users have uploaded. Finally, prompt libraries can assist with version control, ensuring that evolving prompts remain traceable, testable, and reusable over time.

Prompt Component Library

Prompts are frequently highly specific, which can result in prompt libraries quickly becoming outdated. Many organizations that maintain such libraries find themselves with thousands of unused prompts. While these libraries may offer inspiration to users, it is worth considering the overall value they truly provide.
Due to the rapid pace at which models change, prompts stored in these libraries can become outdated. Techniques embedded within prompts, such as Chain of Thought, may become less relevant or unnecessary with the introduction of reasoning models. When multiple prompts rely on techniques or formats that need updates, maintaining the library’s usefulness for users becomes challenging.

Prompt Components

Drawing from Software Engineering, prompts can be divided into components to increase reusability. This modular method enables users to access and combine different parts as needed for their prompts.
Prompt Components are elements of prompts designed for seamless integration into a Prompt Recipe. By adhering to the Crafting AI Prompts Framework, you can develop prompts that align with the framework’s structure, segment them according to the “CRAFT” components, and store each component individually.
If you create a register (R: Register) for social media content, test cases, or code, you can store it in a library to reuse across multiple prompts. This way, you only need to define it once for repeated use.
The use of variables is common in prompt components. For instance, adding a variable allows users to include specific context for social media posts. When using the prompt, users can replace the variable with their desired data rather than creating a new prompt each time. Some tools offer interfaces where users can enter variable values directly, then copy the completed prompt or use it within the tool. This process is designed to make the use of different components more efficient.
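The variable mechanism described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the component text and the `topic` variable name are invented for the example, not taken from any specific tool.

```python
from string import Template

# A reusable "Context" component for social media posts, stored once in
# the component library. $topic is the variable the user fills in later.
context_component = Template(
    "You are writing a post for our company's LinkedIn page. "
    "The post should cover: $topic."
)

# At use time, the user supplies a value for the variable instead of
# writing a brand-new prompt from scratch.
filled = context_component.substitute(topic="our new product launch")
print(filled)
```

Tools with a form-style interface do essentially the same thing: they collect values for each variable and render the completed component for the user to copy.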


Utilizing components from a library rather than complete prompts enhances maintainability. As models are updated and prompts require adjustments to optimize performance with
these new models, only the relevant component needs to be revised, ensuring that all associated Prompt Recipes reflect the latest changes efficiently.

Prompt Recipes

After creating prompt components, they can be reused as building blocks or copied individually into the desired model. Alternatively, users can create Prompt Recipes for tasks that are performed regularly, such as content creation for social media or testing specific cases.
Prompt Recipes are made by combining Prompt Components such as Context, Register, Acting Role, Format, and Task, typically one from each category. Following the Crafting AI Prompts Framework, these components make up a “Recipe” that users can copy as a whole, rather than moving each piece individually into their chosen tool.
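The assembly step can be sketched as follows. The component texts below are invented placeholders; only the CRAFT-style category names follow the framework described above.

```python
# Hypothetical components, one per category, as they might be stored in
# a Prompt Component Library.
components = {
    "context": "Our company sells eco-friendly office supplies.",
    "register": "Write in a friendly, professional tone.",
    "acting_role": "Act as a social media marketer.",
    "format": "Produce a post of at most 280 characters.",
    "task": "Announce our new recycled-paper notebooks.",
}

def build_recipe(components: dict) -> str:
    """Join one component from each category into a copy-ready recipe."""
    order = ["context", "register", "acting_role", "format", "task"]
    return "\n\n".join(components[key] for key in order)

recipe = build_recipe(components)
print(recipe)
```

Because the recipe is built from references to stored components, updating a single component (say, the register) automatically carries through to every recipe that uses it.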
In addition, prompt recipes usually carry metadata: the models against which they have been tested and perform as expected, a version number, and sometimes full version control for even better maintainability.

Figure. Prompt Recipes.

Prompt Evals, Testing and Maintainability

Prompt Evaluations (evals) allow for more straightforward testing and maintenance of prompts. Prompt Evals are evaluations specifically designed to assess how well a language model responds to given prompts, the text or instructions you input. They’re often used to measure and improve model performance, especially in tasks like reasoning, following instructions, accuracy, tone, safety, or domain-specific output.
Prompt Evals are most often applied to recipes, as recipes give the full picture. If a recipe no longer produces the intended output, for example after a model update, only the affected prompt component needs to be revised, and all related prompt recipes benefit. This makes testing and maintaining prompts much easier, especially when evals are integrated within the Prompt Component Library.
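A simple prompt eval can be sketched as a set of automated checks on a model's output. In this hypothetical example, `call_model` is a stub standing in for whatever model API your tooling uses; the checks themselves (length limit, product mention) are invented expectations for a social media recipe.

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an HTTP request to an LLM
    # API). Here it returns a canned response so the sketch is runnable.
    return "Introducing our recycled-paper notebooks! #EcoFriendly"

def eval_social_post(prompt: str) -> dict:
    """Run the prompt and score the output against testable expectations."""
    output = call_model(prompt)
    return {
        "within_length": len(output) <= 280,
        "mentions_product": "notebook" in output.lower(),
    }

results = eval_social_post(
    "Announce our recycled-paper notebooks in under 280 characters."
)
print(results)
```

Running such evals after every model or component update flags exactly which recipes regressed, so only the responsible component needs attention.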