Prompt Engineering

Prompt engineering is an emerging discipline focused on developing optimized prompts to apply language models to various tasks efficiently. It helps researchers understand the abilities and limits of large language models (LLMs). By using various prompt engineering techniques, you can often get better answers from foundation models without spending the effort and cost of retraining or fine-tuning them.

Note that prompt engineering does not involve fine-tuning the model. In fine-tuning, the model's weights/parameters are adjusted using training data with the goal of optimizing a cost function, which is generally an expensive process in terms of computation time and actual cost. Prompt engineering instead attempts to guide the trained foundation model (FM) toward more relevant and accurate answers using methods such as better-worded questions, similar examples, intermediate steps, and logical reasoning.
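As a minimal sketch of the "intermediate steps" method mentioned above (often called chain-of-thought prompting), the prompt itself can ask the model to reason step by step, with no change to the model's weights. The question and cue wording here are illustrative, not from any particular model's documentation:

```python
# Contrast a plain question with one that asks for intermediate steps.
# The model is never retrained; only the prompt text changes.
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Zero-shot prompt: just the question.
zero_shot_prompt = f"Q: {question}\nA:"

# Appending a reasoning cue often elicits worked-out intermediate steps.
step_by_step_prompt = f"Q: {question}\nA: Let's think step by step."

print(zero_shot_prompt)
print(step_by_step_prompt)
```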

Prompt engineering leverages the principle of "priming": providing the model with a context of a few (3-5) examples of what the user expects the output to look like, so that the model mimics the previously "primed" behavior. By interacting with the LLM through a series of questions, statements, or instructions, users can effectively guide the LLM's understanding and adjust its behavior according to the specific context of the conversation.
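The priming idea above can be sketched as a few-shot prompt: the example reviews, labels, and task wording below are hypothetical placeholders, chosen only to show the structure.

```python
# Few-shot "priming": the prompt carries 3 labeled examples so the
# model mimics the demonstrated input/output format on the new input.
examples = [
    ("The movie was fantastic!", "positive"),
    ("Terrible service, never again.", "negative"),
    ("It was okay, nothing special.", "neutral"),
]

def build_few_shot_prompt(examples, new_input):
    """Concatenate the primed examples followed by the new, unanswered input."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # Leave the final label blank for the model to complete.
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "I loved the soundtrack.")
print(prompt)
```

The resulting string would be sent as-is to whichever model API you use; the model completes the trailing `Sentiment:` line in the style of the primed examples.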

The key ideas are:

  • Prompt engineering is the fastest way to harness the power of large language models.
  • It optimizes how you work with and direct language models.
  • It boosts model capabilities, improves safety, and builds understanding.
  • It incorporates a range of skills for interfacing with, building with, and advancing language models.
  • It enables new capabilities, such as augmenting a model with domain knowledge, without changing model parameters or fine-tuning.
  • Higher-quality prompt inputs lead to higher-quality outputs.

A well-structured prompt typically contains:

  1. Instructions: A task for the model to perform, or a description of how the model should respond.
  2. Context: External information that guides the model.
  3. Input data: The input you want a response for.
  4. Output indicator: The expected output type or format.
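The four components above can be assembled into a single prompt string; the field contents here are illustrative placeholders, and the helper function is hypothetical rather than part of any library:

```python
# Sketch: assembling a prompt from the four components listed above.
def assemble_prompt(instruction, context, input_data, output_indicator):
    """Join the four prompt components, skipping any left empty."""
    parts = [instruction, context, input_data, output_indicator]
    return "\n\n".join(p for p in parts if p)

prompt = assemble_prompt(
    instruction="Summarize the customer email below in one sentence.",
    context="You are a support assistant for an online bookstore.",
    input_data="Email: My order arrived late and the cover was damaged.",
    output_indicator="Summary:",
)
print(prompt)
```

Not every prompt needs all four parts; the helper simply drops empty components, which mirrors how simple prompts may omit context or an output indicator.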

https://www.youtube.com/watch?v=T9aRN5JkmL8

A very good guide :arrow_double_down:

https://catalog.us-east-1.prod.workshops.aws/workshops/0644c9e9-5b82-45f2-8835-3b5aa30b1848/en-US