Guidelines for Prompting: How to Get the Results You Want

In this video, the instructor presents some guidelines for prompting to help you get the results you want. In particular, she covers two key principles for writing effective prompts. Later, when she walks through the Jupyter Notebook examples, she encourages you to pause the video every now and then, run the code yourself to see the output, and change the prompts to try a few different variations, so you gain hands-on experience with how prompt inputs shape outputs.

So, let’s dive into the first principle, which is to write clear and specific instructions. Express what you want the model to do with instructions that are as clear and specific as you can make them. This guides the model towards the desired output and reduces the chance of irrelevant or incorrect responses. Note that clear does not mean short: longer prompts often provide more clarity and context for the model, which can lead to more detailed and relevant outputs. One tactic to help you write clear and specific instructions is to use delimiters to clearly indicate distinct parts of the input. Delimiters can be any clear punctuation that separates specific pieces of text from the rest of the prompt, such as triple backticks, quotation marks, or XML tags.
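As a rough sketch of the delimiter tactic, here is a small helper that wraps the text to be processed in triple backticks so the model cannot confuse it with the instructions (the function name and the summarization task are illustrative, not from the video):

```python
def build_summarize_prompt(text: str, delimiter: str = "```") -> str:
    """Build a prompt that asks the model to summarize delimited text.

    The delimiter clearly separates the instruction from the input text,
    which also helps guard against the input hijacking the instructions.
    """
    return (
        f"Summarize the text delimited by triple backticks "
        f"into a single sentence.\n"
        f"{delimiter}{text}{delimiter}"
    )

prompt = build_summarize_prompt(
    "Express what you want a model to do by providing instructions "
    "that are as clear and specific as possible."
)
print(prompt)
```

The resulting string would then be sent to whichever model API you are using; the delimiter keeps the boundary between instruction and data unambiguous.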

Another tactic is to ask for a structured output, such as HTML or JSON format. This makes it easier to process the model’s output and ensures a standardized format for further use. You can also ask the model to check whether certain conditions are satisfied before providing a final answer. This helps avoid incorrect responses by instructing the model to verify assumptions or handle potential edge cases.
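A minimal sketch of the structured-output tactic: ask for JSON with named keys, then parse the reply programmatically. The key names and the stand-in reply below are assumptions for illustration, not output from a real model:

```python
import json

def build_json_prompt(task: str, keys: list[str]) -> str:
    """Build a prompt that requests the answer as JSON with specific keys."""
    return (
        f"{task}\n"
        f"Provide the answer in JSON format as a list of objects "
        f"with the following keys: {', '.join(keys)}."
    )

prompt = build_json_prompt(
    "Generate three made-up book titles along with their authors and genres.",
    ["book_id", "title", "author", "genre"],
)
print(prompt)

# A well-formed reply can be loaded directly into Python data structures.
# This stand-in string simulates what a compliant model might return:
fake_reply = '[{"book_id": 1, "title": "T", "author": "A", "genre": "G"}]'
books = json.loads(fake_reply)
print(books[0]["title"])
```

Because the format is standardized, downstream code can consume the result without any ad-hoc text parsing.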

The second principle is to give the model time to think. If a model is rushing to an incorrect conclusion, you can reframe the query to request a series of relevant reasoning steps before the final answer. This allows the model to work out its own solution and reduces the chance of reasoning errors. Asking the model to think longer about a problem can lead to more accurate responses.
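One way to apply this principle is to enumerate the reasoning steps you want before the final answer. The grading scenario and step wording below are a hedged sketch, not the video's exact prompt:

```python
def build_stepwise_prompt(problem: str) -> str:
    """Build a prompt that forces intermediate reasoning before a verdict."""
    steps = [
        "First, work out your own solution to the problem.",
        "Then, compare your solution to the student's solution.",
        "Only after doing both, decide if the student's solution is correct.",
    ]
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return f"{problem}\n{numbered}\nFinal answer: correct or incorrect."

prompt = build_stepwise_prompt(
    "A student solved x + 2 = 5 and got x = 4. Is the student correct?"
)
print(prompt)
```

By requiring the model to derive its own solution first, you make it less likely to rush to agreement with an incorrect answer.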

However, it’s important to keep in mind the limitations of language models. While they have been trained on a vast amount of knowledge, they may still produce fabricated ideas or hallucinations. These are plausible-sounding but incorrect responses. To mitigate this, you can ask the model to find relevant quotes from a text and use those quotes to answer questions, ensuring traceability to the source document.
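The quote-grounding tactic might look like the following sketch, where the model is told to extract quotes first and answer only from them (the exact wording is an assumption, not the course's verbatim prompt):

```python
def build_grounded_prompt(document: str, question: str) -> str:
    """Build a prompt that ties the answer to quotes from the source text."""
    return (
        "First, find exact quotes from the document below that are relevant "
        "to the question. Then answer the question using only those quotes, "
        "citing each one. If no relevant quote exists, say you cannot "
        "answer from the document.\n"
        f'Document: """{document}"""\n'
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "The cat sat on the mat by the window.",
    "Where did the cat sit?",
)
print(prompt)
```

Since every claim must trace back to a quoted span, fabricated answers become easier to spot and reject.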

In conclusion, following these guidelines for prompting can help you get the results you want from language models. Remember to write clear and specific instructions, give the model time to think, and be aware of the model’s limitations. It’s an iterative process, so feel free to experiment and refine your prompts to achieve the desired outcomes.
