Best practices for prompt engineering with the OpenAI API

https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api

 

How prompt engineering works

Due to the way OpenAI models are trained, there are specific prompt formats that work particularly well and lead to more useful model outputs.

The official prompt engineering guide by OpenAI is usually the best place to start for prompting tips.

 

Below we present a number of prompt formats we find work well, but feel free to explore other formats that may better suit your task.

 
Rules of Thumb and Examples

Note: the "{text input here}" is a placeholder for actual text/context

 
1. Use the latest model

For best results, we generally recommend using the latest, most capable models. Newer models tend to be easier to prompt engineer.

 
2. Put instructions at the beginning of the prompt and use ### or """ to separate the instruction and context

Less effective ❌:

Summarize the text below as a bullet point list of the most important points.

{text input here}

 

Better ✅:

Summarize the text below as a bullet point list of the most important points.

Text: """
{text input here}
"""

 

 
3. Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc


 

Less effective ❌:

Write a poem about OpenAI.

 

Better ✅:

Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text to image ML model) in the style of a {famous poet}

 

 
4. Articulate the desired output format through examples

Less effective ❌:

Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and themes.

Text: {text}

Show and tell: the models respond better when shown specific format requirements. This also makes it easier to programmatically parse out multiple outputs reliably, as sketched after the example below.

 

Better ✅:

Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes

Desired format:
Company names: <comma_separated_list_of_company_names>
People names: -||-
Specific topics: -||-
General themes: -||-

Text: {text}
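
Because the reply follows a fixed "Label: comma-separated list" layout, it can be parsed with a simple pattern. The snippet below is a sketch under that assumption; real replies may still need validation, and the labels and sample reply are only illustrative.

import re

# Sketch: parse a reply that follows the "Label: comma_separated_list" layout above.
def parse_entities(reply: str) -> dict:
    labels = ["Company names", "People names", "Specific topics", "General themes"]
    result = {}
    for label in labels:
        match = re.search(rf"^{label}:\s*(.*)$", reply, flags=re.MULTILINE)
        values = match.group(1) if match else ""
        result[label] = [item.strip() for item in values.split(",") if item.strip()]
    return result

sample_reply = (
    "Company names: OpenAI, Stripe\n"
    "People names: Jane Doe\n"
    "Specific topics: payment APIs, language models\n"
    "General themes: developer tools"
)
print(parse_entities(sample_reply)["Company names"])  # ['OpenAI', 'Stripe']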

 

 
5. Start with zero-shot, then few-shot; if neither of them works, then fine-tune

✅ Zero-shot

Extract keywords from the text below.

Text: {text}

Keywords:

 

✅ Few-shot - provide a couple of examples

Extract keywords from the corresponding texts below.

Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.
Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications
##
Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.
Keywords 2: OpenAI, language models, text processing, API.
##
Text 3: {text}
Keywords 3:
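
A few-shot prompt like the one above can be built from a small list of (text, keywords) examples, so the same helper covers the zero-shot case (empty list) and the few-shot case. This is a sketch only; the wording and the "##" separator simply mirror the example above.

# Sketch: assemble the keyword-extraction prompt above from example pairs.
def build_keyword_prompt(examples: list, new_text: str) -> str:
    parts = ["Extract keywords from the corresponding texts below.", ""]
    for i, (text, keywords) in enumerate(examples, start=1):
        parts.append(f"Text {i}: {text}")
        parts.append(f"Keywords {i}: {keywords}")
        parts.append("##")
    n = len(examples) + 1
    parts.append(f"Text {n}: {new_text}")
    parts.append(f"Keywords {n}:")
    return "\n".join(parts)

examples = [
    ("Stripe provides APIs that web developers can use to integrate payment processing "
     "into their websites and mobile applications.",
     "Stripe, payment processing, APIs, web developers, websites, mobile applications"),
]
print(build_keyword_prompt(examples, "{text}"))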

 

✅ Fine-tune: see the fine-tuning best practices here.

 

 
6. Reduce “fluffy” and imprecise descriptions

Less effective ❌:

The description for this product should be fairly short, a few sentences only, and not too much more.

 

Better ✅:

Use a 3 to 5 sentence paragraph to describe this product.

 

 
7. Instead of just saying what not to do, say what to do instead

Less effective ❌:

The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.

Customer: I can’t log in to my account.
Agent:

 

Better ✅:

The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq

Customer: I can’t log in to my account.
Agent:

 

 
8. Code Generation Specific - Use “leading words” to nudge the model toward a particular pattern

Less effective ❌:

# Write a simple python function that
# 1. Asks me for a number in miles
# 2. Converts miles to kilometers

 

In the code example below, adding “import” hints to the model that it should start writing in Python. (Similarly, “SELECT” is a good hint for the start of a SQL statement.)

 

Better ✅:

# Write a simple python function that
# 1. Asks me for a number in miles
# 2. Converts miles to kilometers
 
import
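
In practice the leading word is simply appended to the end of the prompt so the model continues from it. The sketch below assumes the openai Python SDK (v1+) and a completions-style model; the model name is only an example.

# Sketch: end the prompt with a leading word ("import") so the model continues as Python code.
from openai import OpenAI

client = OpenAI()

prompt = (
    "# Write a simple python function that\n"
    "# 1. Asks me for a number in miles\n"
    "# 2. Converts miles to kilometers\n"
    "\n"
    "import"  # the leading word that nudges the model toward Python
)

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # example completions-style model
    prompt=prompt,
    max_tokens=200,
)
# The completion continues right after "import", so prepend it when assembling the code.
print("import" + response.choices[0].text)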

 

 
Parameters

Generally, we find that model and temperature are the most commonly used parameters to alter the model output.

    model - Higher performance models are generally more expensive and may have higher latency.

    temperature - A measure of how often the model outputs a less likely token. The higher the temperature, the more random (and usually creative) the output. This, however, is not the same as “truthfulness”. For most factual use cases such as data extraction and truthful Q&A, a temperature of 0 is best.

    max_tokens (maximum length) - Does not control the length of the output, but sets a hard cutoff limit for token generation. Ideally you won’t hit this limit often, as your model will stop either when it thinks it’s finished, or when it hits a stop sequence you defined.

    stop (stop sequences) - A set of characters (tokens) that, when generated, will cause the text generation to stop.

For other parameter descriptions see the API reference.
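
These parameters map directly onto arguments of the API call. The sketch below assumes the openai Python SDK (v1+); the model name and the specific values are only illustrative.

# Sketch: the parameters described above passed to a chat completion request.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",    # example model; higher-performance models cost more and may be slower
    temperature=0,     # 0 suits factual tasks such as data extraction and truthful Q&A
    max_tokens=256,    # hard cutoff on generated tokens, not a target output length
    stop=["\n\n##"],   # generation halts if this sequence is produced
    messages=[{"role": "user", "content": "Extract keywords from the text below.\n\nText: ..."}],
)
print(response.choices[0].message.content)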

 

 
