In this video, Isa will present some guidelines for prompting to help you get the results that you want. The video will go over two key principles for writing prompts effectively. Later, when going through the Jupyter notebook examples, you are encouraged to pause the video and run the code yourself to gain experience with different variations of the prompts.
The first principle is to write clear and specific instructions. Express what you want the model to do by providing instructions that are as clear and specific as possible; this guides the model towards the desired output and reduces the chance of irrelevant or incorrect responses. Clear is not the same as short: in many cases, longer prompts actually provide more clarity and context for the model, which can lead to more detailed and relevant outputs.
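To make this concrete, here is a minimal sketch of the first principle. It assumes the OpenAI Python SDK (v1.x) and a chat model such as gpt-3.5-turbo; the get_completion helper name and the example text are illustrative, not taken from the video. The prompt uses delimiters to mark exactly which text the model should work on and states the task and the desired output format explicitly.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single user message and return the model's reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as deterministic as possible
    )
    return response.choices[0].message.content

# Example text to operate on; the content is illustrative.
text = """
Clear and specific instructions guide the model toward the output you want
and reduce the chance of irrelevant or incorrect responses. Longer prompts
often provide more clarity and context for the model.
"""

# Delimiters (<text> tags here) mark exactly which text the task applies to,
# and the instruction states the task and the desired output precisely.
prompt = f"""
Summarize the text delimited by <text> tags into a single sentence.
<text>{text}</text>
"""

print(get_completion(prompt))
```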
The second principle is to give the model time to think. If a model is making reasoning errors by rushing to an incorrect conclusion, it helps to reframe the query and request a chain or series of relevant reasoning steps before the model provides its final answer. This gives the model time to work out its own solution and reduces the chance of incorrect responses.
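Here is a sketch of the second principle under the same assumptions (OpenAI Python SDK, illustrative get_completion helper and model name, made-up arithmetic example): the prompt spells out the intermediate steps the model should work through and asks it to show that reasoning before giving its final verdict.

```python
from openai import OpenAI

client = OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    # Same illustrative helper as in the earlier sketch.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# A made-up problem where rushing to a conclusion is easy to get wrong.
problem = "A student solved 3 + 5 * 2 and got 16. Is the student correct?"

# The prompt asks for intermediate reasoning steps before the final verdict,
# so the model works out its own solution instead of judging immediately.
prompt = f"""
Check the student's work in the text delimited by <problem> tags using these steps:
1 - Work out your own answer to the arithmetic, respecting operator precedence.
2 - Compare your answer to the student's answer.
3 - Only then state whether the student is correct.
Show your reasoning for each step before giving the final verdict.
<problem>{problem}</problem>
"""

print(get_completion(prompt))
```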
It is also important to keep the model's limitations in mind. Although the model has been exposed to a vast amount of knowledge during training, it has not perfectly memorized that information, so it can produce plausible-sounding but incorrect responses, known as hallucinations. One tactic for reducing hallucinations is to ask the model to first find relevant quotes from a supplied text and then use those quotes to answer the question.
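A sketch of that quote-first tactic, again with the same assumed helper and an illustrative stand-in document: the model is asked to extract relevant quotes before answering, and to say so explicitly when no quote supports an answer.

```python
from openai import OpenAI

client = OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    # Same illustrative helper as in the earlier sketches.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Stand-in source document; in practice this would be your own text.
document = """
The first principle is to write clear and specific instructions. The second
principle is to give the model time to think by requesting intermediate
reasoning steps before the final answer.
"""

# Asking for supporting quotes first grounds the answer in the supplied text
# and gives the model an explicit way out when no evidence exists.
prompt = f"""
First find the quotes from the document delimited by <doc> tags that are
relevant to the question, then answer the question using only those quotes.
If the document contains no relevant quote, reply "No relevant quote found."

Question: What is the second principle of prompting?
<doc>{document}</doc>
"""

print(get_completion(prompt))
```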
These guidelines will help you prompt the model effectively and improve the quality of the responses you receive. When writing prompts, keep both principles in mind: write clear and specific instructions, and give the model time to think.