In this video, the user will present some guidelines for prompting to help you get the results that you want, in particular, it should go over two key principles for how to write prompts to prompt engineers effectively. A little bit later, when she’s going over the Jupiter notebook examples, I’d also encourage you to feel free to pause the video every now and then to run the code yourself so you can see what the output is like and even change the exact prompts and play with a few different variations to gain experience with what the inputs and outputs are like.
So, I’m going to outline some principles and tactics that will be helpful while working with language models like ChatGPT. First, I’ll go over these at a high level, and then we’ll apply the specific tactics with examples throughout the entire course.
The first principle is to write clear and specific instructions. You should express what you want the model to do by providing instructions that are as clear and specific as you can possibly make them. This will guide the model towards the desired output and reduce the chance of irrelevant or incorrect responses. Don’t confuse writing a clear prompt with writing a short prompt because, in many cases, longer prompts actually provide more clarity and context for the model, which can lead to more detailed and relevant outputs.
The first tactic to help you write clear and specific instructions is to use delimiters to clearly indicate distinct parts of the input. Delimiters can be any clear punctuation that separates specific pieces of text from the rest of the prompt. These could be triple backticks, quotes, XML tags, section titles, or anything that makes it clear to the model that this is a separate section.
Using delimiters is also a helpful technique to avoid prompt injections. Prompt injection occurs when a user is allowed to add input into your prompt, and they might give conflicting instructions to the model, which can make it follow the user’s instructions rather than doing what you wanted it to do.
The second tactic is to ask for a structured output. To make passing the model outputs easier, it can be helpful to ask for a structured output like HTML or JSON. This allows you to easily parse and manipulate the output in your code.
The third tactic is to ask the model to check whether conditions are satisfied. If the task makes assumptions that aren’t necessarily satisfied, you can tell the model to check these assumptions first. This can help avoid unexpected errors or results.
The fourth tactic is to use few-shot prompting. This involves providing examples of successful executions of the task before asking the model to do the actual task. This helps the model understand the desired output and improves its performance.
The second principle is to give the model time to think. If a model is rushing to an incorrect conclusion, you should reframe the query to request a chain or series of relevant reasoning before the model provides its final answer. This allows the model to work out its own solution and reduces the chance of incorrect responses.
Some tactics for giving the model time to think include specifying the steps required to complete a task, instructing the model to work out its own solution before rushing to a conclusion, and asking the model to check its solution against a known correct solution.
It’s important to keep in mind the limitations of language models. While they have been trained on a vast amount of knowledge, they may still generate incorrect or fabricated information. This is known as hallucination. To reduce hallucinations, you can ask the model to find relevant quotes from the text and use those quotes to answer questions.
In conclusion, these guidelines for prompting will help you get the results you want from language models. By writing clear and specific instructions, giving the model time to think, and being aware of the model’s limitations, you can improve the quality of the model’s responses.