TL;DR (by GPT-4 🤖):
Prompt Engineering, or In-Context Prompting, is a method for guiding large language models (LLMs) towards desired outcomes without changing the model weights. The article discusses techniques such as basic prompting, instruction prompting, self-consistency sampling, Chain-of-Thought (CoT) prompting, automatic prompt design, augmented language models, retrieval, programming languages, and external APIs. The effectiveness of these techniques can vary significantly among models, so extensive experimentation and heuristic approaches are often needed. The article emphasizes the importance of selecting diverse and relevant examples, giving precise instructions, and using external tools to enhance the model’s reasoning skills and knowledge base.
Notes (by GPT-4 🤖):
Prompt Engineering: An Overview
- Introduction
- Prompt Engineering, also known as In-Context Prompting, is a method of steering the behavior of large language models (LLMs) towards desired outcomes without updating the model weights.
- The effectiveness of prompt engineering methods can vary significantly among models, necessitating extensive experimentation and heuristic approaches.
- This article focuses on prompt engineering for autoregressive language models, excluding Cloze tests, image generation, and multimodal models.
- Basic Prompting
- Zero-shot and few-shot learning are the two most basic approaches for prompting the model.
- Zero-shot learning involves feeding the task text to the model and asking for results.
- Few-shot learning presents a set of high-quality demonstrations, each consisting of both input and desired output, on the target task.
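The zero-shot vs. few-shot distinction is easiest to see in the prompt text itself. Here is a minimal sketch for a sentiment-classification task; the task wording and demonstrations are illustrative, and the resulting string would be sent to whichever model API you use.

```python
# Zero-shot: just the task and the input. Few-shot: the same task preceded
# by (input, desired output) demonstration pairs.

def zero_shot_prompt(text: str) -> str:
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {text}\nSentiment:"
    )

def few_shot_prompt(text: str, demos: list[tuple[str, str]]) -> str:
    parts = ["Classify the sentiment of the following review as Positive or Negative."]
    for demo_input, demo_output in demos:
        parts.append(f"Review: {demo_input}\nSentiment: {demo_output}")
    parts.append(f"Review: {text}\nSentiment:")  # the actual query comes last
    return "\n\n".join(parts)

demos = [
    ("The film was a delight from start to finish.", "Positive"),
    ("I walked out halfway through.", "Negative"),
]
print(few_shot_prompt("Great soundtrack, weak plot.", demos))
```

The prompt ends mid-pattern (`Sentiment:`) so the model's most natural continuation is the label itself.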
- Tips for Example Selection and Ordering
- Choose examples that are semantically similar to the test example.
- At the same time, the selected set should be diverse and relevant to the test sample, and presented in random order to avoid ordering biases such as recency bias toward the last example.
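These two tips can be combined mechanically: rank candidate demonstrations by similarity to the test example, take the top k, then shuffle them. The cosine-similarity scoring below runs over made-up embedding vectors; a real pipeline would get them from a sentence-embedding model.

```python
import math
import random

# Pick the k demonstrations most similar to the test example, then shuffle
# them so their ordering introduces no positional bias.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_demos(test_vec, pool, k=2, seed=0):
    # pool: list of (embedding, demonstration text) pairs
    ranked = sorted(pool, key=lambda item: cosine(test_vec, item[0]), reverse=True)
    chosen = [text for _, text in ranked[:k]]
    random.Random(seed).shuffle(chosen)  # randomize order to avoid ordering bias
    return chosen

pool = [
    ([1.0, 0.1], "Review: Loved it. -> Positive"),
    ([0.9, 0.2], "Review: Brilliant acting. -> Positive"),
    ([0.0, 1.0], "Question: 2+2? -> 4"),
]
print(select_demos([1.0, 0.0], pool))
```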
- Instruction Prompting
- Instruction prompting involves giving the model direct instructions, which can be more token-efficient than few-shot learning.
- Models like InstructGPT are fine-tuned with high-quality tuples of (task instruction, input, ground truth output) to better understand user intention and follow instructions.
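A minimal sketch of the (instruction, input) style that instruction-tuned models are trained on; the exact field names and instruction wording are illustrative, not a fixed schema.

```python
# Instruction prompting spends tokens on a precise task description rather
# than on demonstrations, which is often more token-efficient than few-shot.

def instruction_prompt(instruction: str, model_input: str) -> str:
    return (
        f"Instruction: {instruction}\n"
        f"Input: {model_input}\n"
        f"Output:"
    )

prompt = instruction_prompt(
    "Translate the following sentence into French.",
    "The weather is nice today.",
)
print(prompt)
```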
- Self-Consistency Sampling
- Self-consistency sampling involves sampling multiple outputs and selecting the best one out of these candidates.
- The criteria for selecting the best candidate can vary from task to task.
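For tasks with a short, checkable final answer, the most common selection criterion is a simple majority vote over the sampled answers. In this sketch the list of answers stands in for final answers extracted from several temperature-sampled completions of the same prompt.

```python
from collections import Counter

# Self-consistency: sample several reasoning paths, extract each final
# answer, and keep the answer that the most paths agree on.

def majority_vote(answers: list[str]) -> str:
    return Counter(answers).most_common(1)[0][0]

# Pretend these came from 5 sampled completions; three paths agree on "18".
sampled = ["18", "20", "18", "18", "22"]
print(majority_vote(sampled))  # "18"
```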
- Chain-of-Thought (CoT) Prompting
- CoT prompting generates a sequence of short sentences that describe the reasoning logic step by step, leading to the final answer.
- CoT prompting can be either few-shot or zero-shot.
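The zero-shot variant needs no demonstrations, only a reasoning trigger appended to the question, plus a second step that extracts the final answer from the free-form reasoning. The completion text and the last-number heuristic below are illustrative stand-ins for a real model response and answer parser.

```python
import re

# Zero-shot CoT: "Let's think step by step." nudges the model to reason
# aloud before answering; the answer is then parsed out of the reasoning.

def zero_shot_cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."

def extract_final_number(completion: str) -> str:
    # Naive heuristic: take the last number mentioned in the reasoning.
    numbers = re.findall(r"-?\d+", completion)
    return numbers[-1] if numbers else ""

fake_completion = (
    "There are 3 cars and each carries 4 people, so 3 * 4 = 12 people in total."
)
print(extract_final_number(fake_completion))  # "12"
```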
- Automatic Prompt Design
- Automatic Prompt Design treats the prompt as a set of trainable parameters and optimizes it directly in the embedding space via gradient descent.
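A toy illustration of the idea: instead of editing discrete tokens, treat a continuous "soft prompt" vector as the trainable parameter and run gradient descent on it while everything else stays frozen. The "model" here is just a dot product; real prompt tuning prepends learned embeddings to a frozen transformer's input.

```python
# Squared-error loss on a frozen dot-product "model"; only the soft prompt
# vector receives gradient updates.

def loss_and_grad(prompt_vec, input_vec, target):
    score = sum(p * x for p, x in zip(prompt_vec, input_vec))
    loss = (score - target) ** 2
    grad = [2 * (score - target) * x for x in input_vec]  # d(loss)/d(prompt)
    return loss, grad

def tune_prompt(input_vec, target, steps=100, lr=0.1):
    prompt_vec = [0.0, 0.0]  # soft prompt, initialized to zeros
    for _ in range(steps):
        loss, grad = loss_and_grad(prompt_vec, input_vec, target)
        prompt_vec = [p - lr * g for p, g in zip(prompt_vec, grad)]
    return prompt_vec, loss

prompt_vec, final_loss = tune_prompt([1.0, 2.0], target=3.0)
print(round(final_loss, 6))
```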
- Augmented Language Models
- Augmented Language Models are models that have been enhanced with reasoning skills and the ability to use external tools.
- Retrieval
- Retrieval helps complete tasks that require knowledge beyond the model's pretraining cutoff, or knowledge from an internal/private knowledge base.
- Many methods for Open Domain Question Answering depend on first doing retrieval over a knowledge base and then incorporating the retrieved content as part of the prompt.
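The retrieve-then-prompt pattern can be sketched in a few lines. The word-overlap scorer below is a deliberately crude stand-in for retrieval; a real system would use a vector index over embeddings. The documents and question are made up.

```python
# Score each document against the question, then paste the best match into
# the prompt as context for the model to answer from.

def word_overlap(question: str, doc: str) -> int:
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, docs: list[str]) -> str:
    return max(docs, key=lambda d: word_overlap(question, d))

def rag_prompt(question: str, docs: list[str]) -> str:
    context = retrieve(question, docs)
    return (
        f"Context: {context}\n\n"
        "Answer the question using only the context.\n"
        f"Question: {question}\nAnswer:"
    )

docs = [
    "The company holiday policy grants 25 vacation days per year.",
    "The cafeteria is open from 8am to 3pm on weekdays.",
]
print(rag_prompt("How many vacation days does the policy grant?", docs))
```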
- Programming Language and External APIs
- Some models generate programming language statements to resolve natural language reasoning problems, offloading the solution step to a runtime such as a Python interpreter.
- Other models are augmented with text-to-text API calls, guiding the model to generate API call requests and append the returned result to the text sequence.
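The program-offloading idea (as in PAL-style approaches) looks like this: prompt the model to answer with code, then let the interpreter do the arithmetic. The generated code here is a hand-written stand-in for a model completion, and real systems must sandbox the execution.

```python
# Run a model-generated program and read off the `answer` variable it sets.

def run_generated_program(code: str) -> object:
    namespace = {}
    exec(code, namespace)  # never exec untrusted model output outside a sandbox
    return namespace["answer"]

# Pretend the model was asked "Roger has 5 balls and buys 2 cans of 3 balls
# each; how many balls does he have?" and answered with this program:
fake_generated = (
    "balls = 5\n"
    "cans = 2\n"
    "balls_per_can = 3\n"
    "answer = balls + cans * balls_per_can\n"
)
print(run_generated_program(fake_generated))  # 11
```

Offloading the final step this way avoids the arithmetic mistakes LLMs often make when computing in plain text.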
As AI hype approaches fever pitch, “Prompt Engineering” has become another buzzword, with an overwhelming number of guides and tutorials cropping up across the internet. Unfortunately, a large portion of these resources offer little more than cookie-cutter strategies, feeding a growing skepticism around the term itself.
It’s easy to dismiss it as just another fad, but doing so overlooks the genuine engineering behind effective communication with LLMs. This guide collects strategies that actually work and are based on sound principles, rather than AI-bros’ guesswork compiled into yet another useless infographic.
I hope it will be just as useful to you as it was to me.