An Introduction To Different Prompting Techniques

Manish Poddar
4 min read · Jan 6, 2024


Prompt engineering is an emerging field that involves carefully designing and optimizing prompts to get the most effective performance out of large language models (LLMs). As LLMs like GPT continue to advance in their capabilities, the way we prompt them is becoming more and more important. Recent research has demonstrated that properly engineered prompts can greatly improve reliability and enable LLMs to handle more complex tasks than previously thought possible.

In this blog post, I will cover different prompting techniques that go beyond basic examples. We will look at methods that allow us to better control LLMs, reduce errors, and improve results for difficult problems. Whether you are an AI researcher experimenting with prototypes or a business leader looking to implement LLMs, understanding these prompt engineering best practices is key to success. By the end, you'll have practical guidance on how to craft effective prompts and unlock greater value from this exciting new technology.

1. Zero-Shot Prompting: This technique asks a large language model (LLM) to perform a new task without any prior examples of that task. You simply provide the model with a natural language description of what you want it to do. For instance, you could prompt an LLM with "Write a blog post paragraph about zero-shot learning." The model will draw on the knowledge it acquired during pretraining and write a paragraph on zero-shot learning.
Fig 1. Zero Shot Prompting (Image Source: https://arxiv.org/pdf/2310.14735.pdf)
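A zero-shot prompt is nothing more than the task description itself. The sketch below builds such a prompt as a plain string; the `zero_shot_prompt` helper and the trailing "Answer:" cue are illustrative conventions, not part of any specific API.

```python
def zero_shot_prompt(task_description: str) -> str:
    """Build a zero-shot prompt: only the task description, no examples."""
    return f"{task_description}\n\nAnswer:"

# Example: sentiment classification described purely in natural language.
prompt = zero_shot_prompt(
    "Classify the sentiment of this review as positive or negative: "
    "'The battery life is fantastic.'"
)
print(prompt)
```

The resulting string would be sent to the model as-is; the model must rely entirely on its pretrained knowledge, since no demonstrations are included.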

2. Few-Shot Prompting: As the name suggests, few-shot prompting supplies the model with one or a few worked examples inside the prompt itself, so it can infer the pattern and apply it to a new input. It works well for relatively simple tasks where a handful of demonstrations is enough to convey the format and intent.

Fig 2 : Few Shot Prompting (Image Source: https://arxiv.org/pdf/2310.14735.pdf)
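The few-shot pattern above can be sketched as a simple prompt assembler: labeled examples first, then the new query left open for the model to complete. The `Input:`/`Output:` labels are just one common convention, assumed here for illustration.

```python
def few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: worked examples followed by the new query."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # Leave the final Output: blank so the model fills in the answer.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("I loved the film.", "positive"),
    ("The plot was dull.", "negative"),
]
print(few_shot_prompt(examples, "Great acting and a gripping story."))
```

Because the demonstrations establish both the task and the answer format, the model typically continues the pattern rather than needing an explicit instruction.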

3. Chain-of-Thought Prompting: Chain-of-thought prompting encourages an LLM to break a complex task into more manageable reasoning steps. Instead of trying to solve a difficult problem all at once, the model is prompted to explain its reasoning by decomposing the solution into a series of incremental steps: it starts by clearly defining the end goal, then thinks through the logical prerequisites and sub-tasks needed to ultimately achieve that goal. This might involve gathering necessary information, making key assumptions, or simplifying parts of the problem. By methodically walking through this reasoning chain, the model can tackle challenging tasks that would otherwise fail if attempted as a single monolithic step. Chain-of-thought prompting also lays the foundation for related techniques that further separate task decomposition from task execution, and it provides a structured way for AI systems to be more transparent about their internal reasoning.

Fig 3 : Chain of Thought Prompting (Image Source : https://arxiv.org/abs/2205.11916 )
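In its simplest zero-shot form, chain-of-thought prompting just appends a reasoning trigger to the question. The phrase "Let's think step by step." is the trigger studied in the paper cited above; the `chain_of_thought_prompt` helper itself is a minimal sketch.

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question with the zero-shot chain-of-thought trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

print(chain_of_thought_prompt(
    "A shop sells pens at 3 for $2. How much do 12 pens cost?"
))
```

A model given this prompt tends to produce the intermediate steps (12 pens = 4 groups of 3, 4 × $2 = $8) before the final answer, rather than guessing the answer directly.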

4. Self-Consistency: The idea behind this technique is to sample multiple, diverse reasoning paths for the same prompt. For instance, when presented with an arithmetic word problem, the model would explore different ways to set up and solve the equations, and the final answer is then chosen by majority vote over the answers those paths produce.

Fig 4: Self Consistency (Image Source : https://arxiv.org/abs/2205.11916 )
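The aggregation step of self-consistency is a plain majority vote over the final answers extracted from each sampled reasoning chain. A minimal sketch, assuming the chains have already been sampled and their answers extracted:

```python
from collections import Counter

def self_consistent_answer(answers):
    """Pick the answer that the most sampled reasoning paths agree on."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes

# Final answers extracted from five independently sampled reasoning chains:
sampled = ["18", "18", "26", "18", "26"]
print(self_consistent_answer(sampled))  # ('18', 3)
```

The sampling itself (running the same chain-of-thought prompt several times at a nonzero temperature) is omitted here; only the voting logic is shown.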

5. Tree of Thoughts: This is an exciting new technique that shows promise for creative writing and problem-solving tasks. It works by building a tree structure where each node represents a coherent thought or idea that serves as an intermediate step towards the final solution or output. For creative writing tasks like ad copy generation, tree of thoughts can help explore different angles and perspectives by branching out ideas: it can start from a high-level messaging strategy and progressively break it down into more concrete realizations of that strategy. Similarly, for mathematical reasoning or crosswords, the tree can lay out the logical deduction steps needed to narrow down the right path to the end goal. Overall, by explicitly mapping out the reasoning steps instead of relying entirely on opaque neural model internals, tree of thoughts offers more transparency, control, and efficiency in steering the text generation process. As the technique matures, we can expect more complex trees to translate into more sophisticated and creative end results.

Fig 5: Tree of thoughts prompting (Image Source : https://aclanthology.org/2023.rocling-1.33.pdf)
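The search over the thought tree can be sketched as a beam search: an `expand` function proposes candidate next thoughts, a `score` function rates how promising each partial solution is, and only the best few states survive each level. Both functions would be backed by an LLM in practice; the toy lambdas below are stand-ins for illustration only.

```python
def tree_of_thoughts(root, expand, score, beam_width=2, depth=2):
    """Level-by-level search over partial 'thoughts', keeping the
    top `beam_width` most promising states at each depth."""
    frontier = [root]
    for _ in range(depth):
        # Branch: propose candidate continuations of every surviving state.
        candidates = [s for state in frontier for s in expand(state)]
        # Prune: keep only the highest-scoring partial solutions.
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam_width]
    return max(frontier, key=score)

# Toy stand-ins for an LLM proposer and evaluator:
expand = lambda s: [s + c for c in "ab"]   # two candidate next thoughts
score = lambda s: s.count("a")             # "a" marks a promising step
print(tree_of_thoughts("", expand, score))  # → 'aa'
```

The branch-then-prune loop is what distinguishes tree of thoughts from a single chain of thought: weak intermediate steps are discarded before they can derail the final answer.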

Summary:
This blog post provided an introduction to various prompting techniques that can be used to optimize the performance of large language models (LLMs). Zero-shot prompting allows LLMs to perform new tasks without examples, using only a natural language description. Few-shot prompting adds a handful of worked examples for simpler tasks. Chain-of-thought prompting breaks complex tasks into manageable reasoning steps and improves transparency. Self-consistency samples multiple diverse reasoning paths and takes a majority vote over the final answers. Tree of thoughts builds a tree structure that maps out reasoning steps for improved transparency, control, and efficiency in text generation. Properly engineered prompts can unlock greater value from LLMs by improving reliability, enabling more complex tasks, reducing errors, and giving users better control over the models. Understanding these prompting best practices is key for both AI researchers and business leaders looking to implement LLMs.



Written by Manish Poddar

Machine Learning Engineer at AWS | Generative AI | MS in AI & ML, Liverpool John Moores University | Solving Data Problem
