In addition to the framework, several techniques can be used to enhance output. Knowing when and why to use these methods can improve prompt quality and results.
When prompting, understanding Zero-, Single-, and Few-Shot prompts is important. These terms refer to the amount of task-specific information provided to the model. The models we're working with are trained on a vast amount of data. However, there may be cases where you refer to data that is not available to the model. Providing examples can help guide the model toward a specific direction but may limit its creativity. Adding more examples generally increases the accuracy of results, within the constraints of the model's context window. Conversely, including zero or only one example can result in responses with greater variability or creativity.
Zero-Shot Prompting is a technique where a language model is asked to perform a task without being shown any examples beforehand. Instead, the prompt includes only a clear instruction or question, relying on the model’s pre-trained knowledge to understand what is being asked and generate an appropriate response. This approach tests the model’s ability to generalize and apply learned concepts to new tasks without additional guidance. It’s especially useful when working with unfamiliar tasks or when keeping prompts concise is important.
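A Zero-Shot prompt can be sketched as a plain instruction plus the input, with no examples. The classification task and wording below are illustrative assumptions, not part of any specific API:

```python
# Minimal sketch of a Zero-Shot prompt: the instruction stands alone,
# with no worked examples -- the model relies on pre-trained knowledge.
# Task and phrasing are invented for illustration.

def build_zero_shot_prompt(text: str) -> str:
    """Return a prompt containing only the instruction and the input."""
    return (
        "Classify the sentiment of the following review as "
        "positive, negative, or neutral.\n\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

print(build_zero_shot_prompt("The checkout process was quick and painless."))
```

The prompt stays concise, which is exactly the trade-off described above: minimal guidance, maximal reliance on what the model already knows.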
Single-Shot Prompting is a technique in which a language model is given a single example to demonstrate the desired task or output format. This example helps the model better understand the context, style, or structure required, leading to more accurate and relevant responses compared to Zero-Shot Prompting. For instance, a user might share a blog post they've written, and the model uses it to learn the user's register (tone, language, and structure), allowing it to generate new content that closely matches the original style. This approach is especially effective when the task depends on a specific voice, format, or pattern that the model might not infer from instructions alone.
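The blog-post scenario above can be sketched as a prompt that embeds one example before the instruction. The placeholder text and wording are assumptions made for illustration:

```python
# Sketch of a Single-Shot prompt for style matching: one sample blog post
# serves as the example, followed by the actual request.
# The sample post and topic are placeholders.

def build_single_shot_prompt(sample_post: str, topic: str) -> str:
    """Embed one example of the user's writing, then ask for new content."""
    return (
        "Here is a blog post I wrote:\n\n"
        f"{sample_post}\n\n"
        f"Write a new blog post about {topic} that matches the tone, "
        "language, and structure of the example above."
    )

print(build_single_shot_prompt("Testing is a craft, not a checklist...",
                               "exploratory testing"))
```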
Few-Shot Prompting builds on the principle of Single-Shot Prompting by providing the model with a small number of examples (typically between two and five) to illustrate the task. These multiple examples offer the model richer context, helping it generalize better and produce more accurate and consistent responses. For instance, if a user shares several of their blog posts, the model can learn the overall register (such as a professional, conversational, or persuasive tone) rather than focusing too heavily on specific word choices or metaphors from a single piece. This allows the model to better align with the user's intended voice, making it especially effective for tasks where tone, structure, or nuanced communication style are critical.
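A Few-Shot prompt simply extends the same pattern with several labelled examples before the new input. The examples and task below are illustrative assumptions:

```python
# Sketch of a Few-Shot prompt: a handful of labelled examples (here three,
# within the typical two-to-five range) give the model richer context.
# Task and example reviews are invented for illustration.

EXAMPLES = [
    ("The delivery arrived two days early.", "positive"),
    ("The battery died after a single charge.", "negative"),
    ("The package contained the items I ordered.", "neutral"),
]

def build_few_shot_prompt(new_text: str) -> str:
    """Prepend the labelled examples, then ask about the new input."""
    header = ("Classify the sentiment of each review as positive, "
              "negative, or neutral.\n\n")
    shots = "".join(f"Review: {t}\nSentiment: {s}\n\n" for t, s in EXAMPLES)
    return header + shots + f"Review: {new_text}\nSentiment:"

print(build_few_shot_prompt("It does what it says on the box."))
```

Note that every example consumes context-window space, which is why the number of shots is bounded in practice.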
Chain of Thought Prompting (CoT Prompting) [Wei et al. 2022] is a technique that encourages language models to generate step-by-step explanations before delivering a final answer. This approach helps the model produce more accurate and interpretable outputs, especially for tasks involving logic, math, or multi-step reasoning.
Before diving into Chain of Thought Prompting, it’s helpful to contrast it with simpler input-output prompting, where a single instruction is given without encouraging step-by-step reasoning. While input-output prompting can be effective for straightforward tasks, it often falls short in situations requiring more complex or multi-step thinking.
By including instructions like “Elaborate in detail,” users can prompt the model to fill the context window with intermediate reasoning steps. Importantly, the model is not truly reasoning in the human sense; it is generating plausible reasoning patterns based on its training data. These “chains of thought” simply populate the prompt with relevant context, improving the likelihood of a correct response through more effective pattern completion.
When no examples are provided, this is called Zero-Shot Chain of Thought Prompting. While helpful, it’s often more effective to include a few examples showing the desired reasoning process (known as Few-Shot CoT Prompting) as this gives the model a clearer reference for how to respond.
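A Zero-Shot CoT prompt can be as simple as appending a reasoning trigger such as "Let's think step by step." to the question; a Few-Shot variant additionally includes a worked example with its reasoning spelled out. The question and example below are invented for illustration:

```python
# Sketch of Zero-Shot vs. Few-Shot CoT prompts. The trigger phrase
# "Let's think step by step." is a commonly used reasoning cue;
# the arithmetic question and worked example are placeholders.

def build_zero_shot_cot(question: str) -> str:
    """No examples -- just a cue that elicits intermediate reasoning."""
    return f"Q: {question}\nA: Let's think step by step."

def build_few_shot_cot(question: str) -> str:
    """One worked example shows the desired reasoning process."""
    example = (
        "Q: A crate holds 12 bottles. How many bottles are in 4 crates?\n"
        "A: One crate holds 12 bottles. 4 crates hold 4 x 12 = 48 bottles. "
        "The answer is 48.\n\n"
    )
    return example + f"Q: {question}\nA:"

print(build_zero_shot_cot("If a train travels 60 km/h for 2.5 hours, "
                          "how far does it go?"))
```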
Although CoT prompting can significantly improve output quality, especially for complex problems, many modern models include built-in reasoning capabilities. These models already simulate such reasoning behaviors by default, so adding manual CoT instructions is often unnecessary and may even interfere with optimal performance.
Self-Consistency Prompting [Wang et al. 2022] is an extension of Chain of Thought (CoT) prompting that improves the reliability and accuracy of model outputs by aggregating multiple reasoning paths instead of relying on a single response. Rather than asking the model to solve a problem once, this approach prompts the model multiple times with the same question. By collecting and comparing the final answers across these different reasoning traces, the most frequent or consistent outcome is selected, typically using majority voting. This process increases the likelihood of arriving at a correct and stable answer by filtering out outliers and occasional reasoning errors that can occur in a single pass.
The model does not evaluate or refine its own outputs. Instead, Self-Consistency utilizes the model’s variability to generate different reasoning paths and then identifies the most frequent result as the likely answer. This process improves performance through statistical aggregation rather than by increasing understanding.
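The aggregation step itself is plain majority voting over the final answers extracted from each sampled trace. A minimal sketch, assuming the per-trace answers have already been collected:

```python
from collections import Counter

# Sketch of the Self-Consistency aggregation step: given final answers
# extracted from several independently sampled reasoning traces, select
# the most frequent one. The sampled answers below are illustrative.

def majority_vote(answers: list[str]) -> str:
    """Return the most common final answer across reasoning traces."""
    return Counter(answers).most_common(1)[0][0]

# Five sampled traces for the same question; the single outlier ("41")
# is filtered out by the vote.
sampled_answers = ["42", "42", "41", "42", "42"]
print(majority_vote(sampled_answers))  # -> 42
```

This makes the statistical nature of the technique concrete: nothing about any individual trace improves, only the selection among them.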
In addition, prompting the model to explore diverse reasoning paths can be beneficial for uncovering edge cases and alternative perspectives. By converging these different lines of reasoning, the model is more likely to produce a well-rounded and comprehensive answer.