Welcome to this course on ChatGPT Prompt Engineering for Developers. I am thrilled to have with me Isa, who is a member of the technical staff at OpenAI and built the popular ChatGPT retrieval plug-in. She has also contributed to the OpenAI Cookbook, which teaches people prompting. I'm excited to be here and to share some prompting best practices with you.
There is a lot of material on the internet about prompting, with articles like "30 Prompts Everyone Has to Know". However, most of it focuses on the ChatGPT web user interface, which people use for specific and often one-off tasks. The real power of large language models for developers lies in using API calls to quickly build software applications.
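As a rough illustration of what such an API call can look like, here is a minimal sketch, assuming the openai Python package (version 1.x) and an API key in the OPENAI_API_KEY environment variable; the get_completion helper, model name, and prompt are illustrative choices, not part of the course material.

```python
# Minimal sketch of calling a chat model through the OpenAI Python library (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads the API key from OPENAI_API_KEY

def get_completion(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single user prompt and return the model's text response."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # low randomness for more reproducible output
    )
    return response.choices[0].message.content

print(get_completion("Summarize the plot of Hamlet in one sentence."))
```

Wrapping the call in a small helper like this makes it easy to experiment with different prompts throughout an application.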
In this course, we will share some of the possibilities of what you can do with language models, as well as best practices for using them. We will cover prompting best practices for software development and common use cases such as summarizing, inferring, transforming, and expanding. Finally, you will have the opportunity to build a chatbot using a language model.
There are two types of language models: base language models (base LMs) and instruction-tuned language models (instruction-tuned LMs). A base LM is trained to predict the next word from text training data, while an instruction-tuned LM is further trained to follow instructions. For example, given the prompt "What is the capital of France?", a base LM might continue with more related questions, such as "What is France's largest city?", whereas an instruction-tuned LM would answer that the capital of France is Paris. An instruction-tuned LM is recommended for most practical applications, as it is easier to use and safer.
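As a hedged sketch of the difference in practice, the snippet below sends the same question to a base completion model and to an instruction-tuned chat model; the model names ("davinci-002" and "gpt-3.5-turbo") are illustrative assumptions, not models named in the course.

```python
# Querying a base LM and an instruction-tuned LM with the same question (openai >= 1.0).
from openai import OpenAI

client = OpenAI()
question = "What is the capital of France?"

# Base model: plain next-word prediction via the completions endpoint.
base = client.completions.create(
    model="davinci-002",  # illustrative base model name
    prompt=question,
    max_tokens=30,
)
print("Base LM:", base.choices[0].text)  # may simply continue the text

# Instruction-tuned model: treats the question as an instruction to answer.
tuned = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative instruction-tuned model name
    messages=[{"role": "user", "content": question}],
)
print("Instruction-tuned LM:", tuned.choices[0].message.content)
```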
When using an instruction-tuned LM, think of it as giving instructions to another person who is smart but doesn't know the specifics of your task. Clear and specific instructions are important: say exactly what you want, and specify the tone and style of the output. Giving the LM time to think, for example by asking it to work through intermediate steps before producing a final answer, is also crucial. A sketch of both principles follows.
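Below is a small sketch of prompts built around these two principles; the sample text, delimiters, step lists, and model name are assumptions made for illustration, not prompts from the course itself.

```python
# Illustrative prompts for the two principles (openai >= 1.0).
from openai import OpenAI

client = OpenAI()
text = "Some customer review text to work with..."

# Principle 1: clear, specific instructions — mark the input with delimiters
# and state the desired length and tone explicitly.
clear_prompt = f"""
Summarize the text between <text> and </text> in one sentence,
using a friendly, informal tone.
<text>{text}</text>
"""

# Principle 2: give the model time to think — ask for intermediate steps
# instead of demanding an immediate final answer.
step_by_step_prompt = f"""
Perform the following steps on the text between <text> and </text>:
1. Summarize the text in one sentence.
2. List the main topics mentioned in the summary.
3. Output a short report combining the summary and the topic list.
<text>{text}</text>
"""

for prompt in (clear_prompt, step_by_step_prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(response.choices[0].message.content)
```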
I hope this course will spark your imagination about new applications that you can build using language models. Let’s dive into the world of language models and software development!