OpenAI Prompt Engineering Guide, with Meta-Prompt Examples

Prompt Engineering Guide: 6 Strategies

1. Write clear instructions

These models can't read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you'd like to see. The less the model has to guess at what you want, the more likely you'll get it.

1)Tactic: Include important details or context in your query to get more relevant answers

Otherwise, the model is left to guess what you mean.

Prompt example 1:

worse: Write code to calculate the Fibonacci sequence.

better: Write a TypeScript function to efficiently calculate the Fibonacci sequence. Comment the code liberally to explain what each piece does and why it's written that way.

Prompt example 2:

worse: Summarize the meeting notes.

better: Summarize the meeting notes in a single paragraph. Then write a markdown list of the speakers and each of their key points. Finally, list the next steps or action items suggested by the speakers, if any.

2)Tactic: Ask the model to adopt a persona

The system message can be used to specify the persona the model should use in its replies.

Example:

System 

When I ask for help to write something, you will reply with a document that contains at least one joke or playful comment in every paragraph.

3)Tactic: Use delimiters to clearly indicate distinct parts of the input

Delimiters can be things like triple quotation marks, XML tags, section titles, etc.

For simple tasks, using delimiters might not make a difference to output quality. However, the more complex a task is, the more important it is to disambiguate its details. Don't make the model work to understand exactly what you are asking of it.

Example:

SYSTEM 

You will be provided with a pair of articles (delimited with XML tags) about the same topic. First summarize the arguments of each article. Then indicate which of them makes a better argument and explain why.

USER

<article> insert first article here </article>

<article> insert second article here </article>

4)Tactic: Specify the steps required to complete a task

Example:

SYSTEM

Use the following step-by-step instructions to respond to user inputs.

Step 1 - The user will provide you with text in triple quotes. Summarize this text in one sentence with a prefix that says "Summary: ".

Step 2 - Translate the summary from Step 1 into Spanish, with a prefix that says "Translation: ".

USER

"""insert text here"""

5)Tactic: Provide examples

That is, few-shot prompting; note, though, that in practice the number of examples shouldn't actually be too small (going by common practical guidance).

6)Tactic: Ask the model to produce output of a specific target length

The target output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note, however, that instructing the model to generate a specific number of words is not very precise. The model can more reliably generate output with a specific number of paragraphs or bullet points.

eg:

  • Summarize the text delimited by triple quotes in about 50 words.
  • Summarize the text delimited by triple quotes in 2 paragraphs.
  • Summarize the text delimited by triple quotes in 3 bullet points.

2. Provide reference text

Language models can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on an exam, providing reference text to these models can help reduce fabricated answers.

1)Tactic: Instruct the model to answer using a reference text

Provide the model with trusted information relevant to the current query.

eg:

SYSTEM

Use the provided articles delimited by triple quotes to answer questions. If the answer cannot be found in the articles, write "I could not find an answer."


USER

<insert articles, each delimited by triple quotes>

Question: <insert question here>

All models have limited context windows, so some way of dynamically looking up information relevant to the question being asked is needed. Embeddings can be used to implement efficient knowledge retrieval. See the tactic "Use embeddings-based search to implement efficient knowledge retrieval" for more details on how to implement this.

2)Tactic: Instruct the model to answer with citations from a reference text

If the input has already been supplemented with relevant knowledge, you can directly request that the model add citations to its answer by referencing passages from the provided documents. Note that citations in the output can then be verified programmatically by string matching within the provided documents.

SYSTEM

You will be provided with a document delimited by triple quotes and a question. Your task is to answer the question using only the provided document and to cite the passage(s) of the document used to answer the question. If the document does not contain the information needed to answer this question then simply write: "Insufficient information." If an answer to the question is provided, it must be annotated with a citation. Use the following format to cite relevant passages ({"citation": …}).

USER

"""<insert document here>"""

Question: <insert question here>
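Because the prompt above fixes the citation format to {"citation": …}, the verification step can be automated. A minimal sketch (the regex and function name are illustrative, not part of the guide):

```python
import json
import re

def verify_citations(answer: str, document: str) -> list[tuple[str, bool]]:
    """Extract {"citation": ...} annotations from a model answer and check
    that each cited passage appears verbatim in the source document."""
    checks = []
    for match in re.finditer(r'\{"citation":\s*"((?:[^"\\]|\\.)*)"\}', answer):
        passage = json.loads('"' + match.group(1) + '"')  # decode JSON escapes
        checks.append((passage, passage in document))
    return checks
```

Any citation that fails the string match is a candidate fabrication and can be flagged or dropped before showing the answer to the user.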

3. Split complex tasks into simpler subtasks

Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to a language model. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.

1)Tactic: Use intent classification to identify the most relevant instructions for a user query

For tasks in which lots of independent sets of instructions are needed to handle different cases, it can be beneficial to first classify the type of query and use that classification to determine which instructions are needed. This can be achieved by defining fixed categories and hard-coding the instructions relevant to handling tasks in each category. This process can also be applied recursively to decompose a task into a sequence of stages. The advantage of this approach is that each query contains only the instructions required to perform the next stage of the task, which can result in lower error rates compared to using a single query to perform the whole task. It can also result in lower costs, since larger prompts cost more to run.

eg:

SYSTEM

You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Provide your output in json format with the keys: primary and secondary.

Primary categories: Billing, Technical Support, Account Management, or General Inquiry.

Billing secondary categories:
- Unsubscribe or upgrade
- Add a payment method
- Explanation for charge
- Dispute a charge

Technical Support secondary categories:
- Troubleshooting
- Device compatibility
- Software updates

Account Management secondary categories:
- Password reset
- Update personal information
- Close account
- Account security

General Inquiry secondary categories:
- Product information
- Pricing
- Feedback
- Speak to a human

USER

I need to get my internet working again.

Based on the classification of the customer query, a set of more specific instructions can be provided to the model for it to handle the next step. For example, suppose the customer requires help with "Troubleshooting".

SYSTEM


You will be provided with customer service inquiries that require troubleshooting in a technical support context. Help the user by:

- Ask them to check that all cables to/from the router are connected. Note that it is common for cables to come loose over time.
- If all cables are connected and the issue persists, ask them which router model they are using
- Now you will advise them how to restart their device:
-- If the model number is MTD-327J, advise them to push the red button and hold it for 5 seconds, then wait 5 minutes before testing the connection.
-- If the model number is MTD-327S, advise them to unplug and replug it, then wait 5 minutes before testing the connection.
- If the customer's issue persists after restarting the device and waiting 5 minutes, connect them to IT support by outputting {"IT support requested"}.
- If the user starts asking questions that are unrelated to this topic then confirm if they would like to end the current chat about troubleshooting and classify their request according to the following scheme:

<insert primary/secondary classification scheme from above here>


USER

I need to get my internet working again.

Notice that the model has been instructed to emit special strings to indicate a change in the state of the conversation. This lets us turn the system into a state machine in which the state determines which instructions are injected. By keeping track of the state, which instructions are relevant at that state, and optionally which state transitions are allowed from it, we can put guardrails around the user experience that would be hard to achieve with a less structured approach.
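The classify-then-route step can be sketched as a simple lookup, assuming the classifier returns the JSON shown in the first example (the instruction strings below are placeholders for the full SYSTEM prompts of each stage):

```python
import json

# Instruction sets keyed by (primary, secondary) category; the strings here
# stand in for the full stage-specific SYSTEM prompts.
INSTRUCTIONS = {
    ("Technical Support", "Troubleshooting"): "troubleshooting instructions ...",
    ("Billing", "Dispute a charge"): "billing dispute instructions ...",
}
GENERAL = "general instructions ..."

def route(classification_json: str) -> str:
    """Map the classifier's JSON output to the next stage's system prompt."""
    c = json.loads(classification_json)
    return INSTRUCTIONS.get((c["primary"], c["secondary"]), GENERAL)
```

A full state machine would additionally track the current state and the allowed transitions out of it, but the core mechanism is this lookup.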

2)Tactic: For dialogue applications that require very long conversations, summarize or filter previous dialogue

Since models have a fixed context length, a dialogue between a user and an assistant in which the entire conversation is included in the context window cannot continue indefinitely.

There are various workarounds to this problem, one of which is to summarize previous turns in the conversation. Once the size of the input reaches a predetermined threshold length, this could trigger a query that summarizes part of the conversation, and the summary of the prior conversation could be included as part of the system message. Alternatively, the prior conversation could be summarized asynchronously in the background throughout the entire conversation.

Another solution is to dynamically select the previous parts of the conversation that are most relevant to the current query. See the tactic "Use embeddings-based search to implement efficient knowledge retrieval".
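The threshold-triggered summarization described above can be sketched as follows; the character-count threshold, the message shape, and the `summarize` callable (which would in practice be a separate model query) are all illustrative assumptions:

```python
def compact_history(messages, summarize, max_chars=8000, keep_last=4):
    """Once the conversation exceeds a size threshold, fold the older turns
    into a summary system message and keep only the recent turns verbatim.
    `summarize` is any callable that turns a list of messages into a short
    summary string (e.g. a separate summarization query to the model)."""
    if sum(len(m["content"]) for m in messages) <= max_chars:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    note = {"role": "system",
            "content": "Summary of earlier conversation: " + summarize(old)}
    return [note] + recent
```

A production version would count tokens rather than characters and might run the summarization asynchronously, as the text above notes.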

3)Tactic: Summarize long documents piecewise and construct a full summary recursively

Since models have a fixed context length, they cannot be used to summarize, in a single query, a text longer than the context length minus the length of the generated summary.

To summarize a very long document such as a book, we can use a sequence of queries to summarize each section of the document. Section summaries can be concatenated and summarized, producing summaries of summaries. This process can proceed recursively until the entire document is summarized. If it's necessary to use information from earlier sections to make sense of later sections, a useful trick is to include a running summary of the text that precedes any given point in the book while summarizing content at that point.

OpenAI has studied the effectiveness of this procedure for summarizing books in previous research using variants of GPT-3.
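The recursive split-summarize-merge procedure can be sketched like this, with `summarize` standing in for a model query and a character-based chunk size as a simplifying assumption:

```python
def summarize_document(text, summarize, chunk_size=2000):
    """Recursively summarize: split the text into chunks, summarize each
    chunk, then summarize the concatenated chunk summaries, repeating until
    the result fits in a single chunk."""
    if len(text) <= chunk_size:
        return summarize(text)
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    merged = " ".join(summarize(chunk) for chunk in chunks)
    return summarize_document(merged, summarize, chunk_size)
```

The running-summary trick from the paragraph above would extend this by passing the summary so far as extra context into each `summarize` call.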

4. Give the model time to "think"

If asked to multiply 17 by 28, you might not know the answer instantly, but you can still work it out given time. Similarly, models make more reasoning errors when trying to answer right away rather than taking time to work out an answer. Asking for a "chain of thought" before an answer can help the model reason its way toward correct answers more reliably.

1)Tactic: Instruct the model to work out its own solution before rushing to a conclusion

Sometimes we get better results when we explicitly instruct the model to reason from first principles before coming to a conclusion. Suppose, for example, we want a model to evaluate a student's solution to a math problem. The most obvious way to approach this is to simply ask the model whether the student's solution is correct.

eg:

SYSTEM

First work out your own solution to the problem. Then compare your solution to the student's solution and evaluate if the student's solution is correct or not. Don't decide if the student's solution is correct until you have done the problem yourself.

USER

Problem Statement: I'm building a solar power installation and I need help working out the financials.
- Land costs $100 / square foot
- I can buy solar panels for $250 / square foot
- I negotiated a contract for maintenance that will cost me a flat $100k per year, and an additional $10 / square foot
What is the total cost for the first year of operations as a function of the number of square feet.

Student's Solution: Let x be the size of the installation in square feet.
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 100x
Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000

ASSISTANT

Let x be the size of the installation in square feet.
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 10x
Total cost: 100x + 250x + 100,000 + 10x = 360x + 100,000

The student's solution is incorrect. They made an error in the maintenance cost calculation, using 100x instead of 10x. The correct total cost for the first year of operations is 360x + 100,000.

2)Tactic: Use inner monologue or a sequence of queries to hide the model's reasoning process

The previous tactic demonstrates that it is sometimes important for the model to reason in detail about a problem before answering a specific question. For some applications, the reasoning process a model uses to arrive at a final answer would be inappropriate to share with the user. For example, in tutoring applications we may want to encourage students to work out their own answers, but a model's reasoning about the student's solution could reveal the answer to the student.

Inner monologue is a tactic that can be used to mitigate this. The idea is to instruct the model to put the parts of the output that are meant to be hidden from the user into a structured format that makes them easy to parse. Then, before presenting the output to the user, the output is parsed and only part of it is made visible.

SYSTEM

Follow these steps to answer the user queries.

Step 1 - First work out your own solution to the problem. Don't rely on the student's solution since it may be incorrect. Enclose all your work for this step within triple quotes (""").

Step 2 - Compare your solution to the student's solution and evaluate if the student's solution is correct or not. Enclose all your work for this step within triple quotes (""").

Step 3 - If the student made a mistake, determine what hint you could give the student without giving away the answer. Enclose all your work for this step within triple quotes (""").

Step 4 - If the student made a mistake, provide the hint from the previous step to the student (outside of triple quotes). Instead of writing "Step 4 - ..." write "Hint:".

USER

Problem Statement: <insert problem statement>

Student Solution: <insert student solution>
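Because the prompt above confines Steps 1-3 to triple-quoted sections, the parsing step is a one-liner. A minimal sketch (the function name is illustrative):

```python
import re

def user_visible(model_output: str) -> str:
    """Strip the triple-quoted reasoning sections (Steps 1-3 of the prompt)
    so that only the final, user-facing hint is shown to the student."""
    return re.sub(r'"""[\s\S]*?"""', "", model_output).strip()
```

Only the text outside the triple quotes, i.e. the "Hint:" line from Step 4, survives the filter.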

Alternatively, this can be achieved with a sequence of queries in which all except the last have their output hidden from the end user.

First, we can ask the model to solve the problem on its own. Since this initial query doesn't require the student's solution, it can be omitted. This provides the additional advantage that the model's solution won't be biased by the student's attempted solution.

Next, we can have the model use all available information to assess the correctness of the student's solution.

Finally, we can have the model use its own analysis to construct a reply in the persona of a helpful tutor.

The prompts for this sequence of queries are as follows:

USER
<insert problem statement>
SYSTEM
Compare your solution to the student's solution and evaluate if the student's solution is correct or not.
USER
Problem statement: """<insert problem statement>"""

Your solution: """<insert model generated solution>"""

Student’s solution: """<insert student's solution>"""
SYSTEM
You are a math tutor. If the student made an error, offer a hint to the student in a way that does not reveal the answer. If the student did not make an error, simply offer them an encouraging comment.
USER
Problem statement: """<insert problem statement>"""

Your solution: """<insert model generated solution>"""

Student’s solution: """<insert student's solution>"""

Analysis: """<insert model generated analysis from previous step>"""

3)Tactic: Ask the model if it missed anything on previous passes

Suppose that we are using a model to list excerpts from a source that are relevant to a particular question. After listing each excerpt, the model needs to determine whether it should start writing another or stop. If the source document is large, it is common for a model to stop too early and fail to list all relevant excerpts. In that case, better performance can often be obtained by prompting the model with follow-up queries to find any excerpts it missed on previous passes.

SYSTEM
You will be provided with a document delimited by triple quotes. Your task is to select excerpts which pertain to the following question: "What significant paradigm shifts have occurred in the history of artificial intelligence."

Ensure that excerpts contain all relevant context needed to interpret them - in other words don't extract small snippets that are missing important context. Provide output in JSON format as follows:

[{"excerpt": "..."},
...
{"excerpt": "..."}]
USER
"""<insert document here>"""
ASSISTANT
[{"excerpt": "the model writes an excerpt here"},
...
{"excerpt": "the model writes another excerpt here"}]
USER
Are there more relevant excerpts? Take care not to repeat excerpts. Also ensure that excerpts contain all relevant context needed to interpret them - in other words don't extract small snippets that are missing important context.
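The follow-up loop can be sketched as follows, with `ask_model` standing in for the conversation above (it receives the excerpts found so far and returns the model's JSON array for the next pass):

```python
import json

def collect_excerpts(ask_model, max_rounds=5):
    """Keep asking for more excerpts until the model returns nothing new,
    deduplicating along the way. `ask_model(found_so_far)` is any callable
    that sends the follow-up query and returns the model's JSON reply."""
    found = []
    for _ in range(max_rounds):
        batch = json.loads(ask_model(found))
        new = [e["excerpt"] for e in batch if e["excerpt"] not in found]
        if not new:
            break
        found.extend(new)
    return found
```

The `max_rounds` cap is a practical guard against a model that keeps producing near-duplicates instead of terminating.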

5. Use external tools

Compensate for the weaknesses of the model by feeding it the outputs of other tools. For example, a text retrieval system (sometimes called RAG, or retrieval-augmented generation) can tell the model about relevant documents. A code execution engine like OpenAI's Code Interpreter can help the model do math and run code. If a task can be done more reliably or efficiently by a tool rather than by a language model, offload it to get the best of both.

1)Tactic: Use embeddings-based search to implement efficient knowledge retrieval

A model can leverage external sources of information if they are provided as part of its input. This can help the model generate more informed and up-to-date responses. For example, if a user asks a question about a specific movie, it may be useful to add high-quality information about the movie (e.g. actors, director, etc.) to the model's input. Embeddings can be used to implement efficient knowledge retrieval, so that relevant information can be added to the model input dynamically at run-time.

A text embedding is a vector that can measure the relatedness between text strings. Similar or relevant strings will be closer together than unrelated strings. This fact, along with the existence of fast vector search algorithms, means that embeddings can be used to implement efficient knowledge retrieval. In particular, a text corpus can be split up into chunks, and each chunk can be embedded and stored. A given query can then be embedded, and a vector search can be performed to find the embedded chunks of text that are most related to the query (i.e. closest together in the embedding space).
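The retrieval step above reduces to a nearest-neighbor search over embedding vectors. A minimal sketch using cosine similarity (in practice the vectors would come from an embeddings API and the search from a vector index rather than a full sort):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_embedding, chunk_embeddings, chunks, k=2):
    """Return the k chunks whose embeddings are closest to the query's."""
    ranked = sorted(zip(chunks, chunk_embeddings),
                    key=lambda pair: cosine_similarity(query_embedding, pair[1]),
                    reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

The retrieved chunks are then pasted into the model's input, as in the "Use the provided articles…" prompt earlier.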

2)Tactic: Use code execution to perform more accurate calculations or call external APIs

Language models cannot be relied upon to perform arithmetic or long calculations accurately on their own. Where this is needed, a model can be instructed to write and run code instead of making its own calculations. In particular, a model can be instructed to put the code that is meant to be run into a designated format such as triple backticks. After an output is produced, the code can be extracted and run. Finally, if necessary, the output from the code execution engine (i.e. a Python interpreter) can be provided as an input to the model for the next query.

eg:

SYSTEM
You can write and execute Python code by enclosing it in triple backticks, e.g. ```code goes here```. Use this to perform calculations.

USER
Find all real-valued roots of the following polynomial: 3*x**5 - 5*x**4 - 3*x**3 - 7*x - 10.
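The extraction step can be sketched as follows (the function name is illustrative; the important part is matching the triple-backtick format the SYSTEM message mandates):

```python
import re

def extract_code_blocks(model_output: str) -> list[str]:
    """Pull out code the model wrapped in triple backticks, ready to be run
    in a sandboxed interpreter. Never exec untrusted code directly."""
    return [block.strip()
            for block in re.findall(r"```(?:python)?\s*([\s\S]*?)```", model_output)]
```

The extracted code would then be executed in a sandbox and its stdout fed back to the model as the next message.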

Another good use case for code execution is calling external APIs. If a model is instructed in the proper use of an API, it can write code that makes use of it. A model can be instructed on how to use an API by providing it with documentation and/or code examples showing how to use it.

SYSTEM
You can write and execute Python code by enclosing it in triple backticks. Also note that you have access to the following module to help users send messages to their friends:

```python
import message
message.write(to="John", message="Hey, want to meetup after work?")
```

WARNING: Executing code produced by a model is not inherently safe, and precautions should be taken in any application that seeks to do this. In particular, a sandboxed code execution environment is needed to limit the harm that untrusted code could cause.

3)Tactic: Give the model access to specific functions

The Chat Completions API allows passing a list of function descriptions in requests. This enables models to generate function arguments that adhere to the provided schemas. Generated function arguments are returned by the API in JSON format and can be used to execute function calls. Output provided by function calls can then be fed back into the model in the following request to close the loop. This is the recommended way of using OpenAI models to call external functions.
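The loop can be sketched in two pieces: a function description in the shape the Chat Completions API expects, and a dispatcher that executes the call the model requests. The `get_weather` function itself is a hypothetical example, not part of any API:

```python
import json

# A function description in the schema shape the Chat Completions API
# expects; get_weather is a hypothetical function used for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call: dict, registry: dict) -> str:
    """Run the function the model asked for and return its result as a
    string, ready to be sent back to the model in a follow-up message."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return str(registry[name](**args))
```

In a real request, `tools` is passed alongside `messages`, and the model's `tool_calls` entries are fed through `dispatch` before replying.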

6. Test changes systematically

Sometimes it can be hard to tell whether a change (e.g. a new instruction or a new design) makes your system better or worse. Looking at a few examples may hint at which is better, but with small sample sizes it can be hard to distinguish a true improvement from random luck. Maybe the change helps performance on some inputs but hurts it on others.

Evaluation procedures (or "evals") are useful for optimizing system designs. Good evals are:

  • Representative of real-world usage (or at least diverse)
  • Contain many test cases for greater statistical power (see the guidelines table in the original guide)
  • Easy to automate or repeat

Evaluation of outputs can be done by computers, by humans, or by a mix. Computers can automate evals with objective criteria (e.g. questions with a single correct answer) as well as some subjective or fuzzy criteria, in which model outputs are evaluated by other model queries. OpenAI Evals is an open-source software framework that provides tools for creating automated evals.

Model-based evals can be useful when there is a range of possible outputs that would be considered equally high in quality (e.g. for questions with long answers). The boundary between what can be realistically evaluated with a model-based eval and what requires a human is fuzzy and keeps shifting as models become more capable. We encourage experimentation to figure out how well model-based evals work for your use case.

1)Tactic: Evaluate model outputs with reference to gold-standard answers

Suppose it is known that the correct answer to a question should reference a specific set of known facts. Then we can use a model query to count how many of the required facts are included in the answer.

SYSTEM
You will be provided with text delimited by triple quotes that is supposed to be the answer to a question. Check if the following pieces of information are directly contained in the answer:

- Neil Armstrong was the first person to walk on the moon.
- The date Neil Armstrong first walked on the moon was July 21, 1969.

For each of these points perform the following steps:

1 - Restate the point.
2 - Provide a citation from the answer which is closest to this point.
3 - Consider if someone reading the citation who doesn't know the topic could directly infer the point. Explain why or why not before making up your mind.
4 - Write "yes" if the answer to 3 was yes, otherwise write "no".

Finally, provide a count of how many "yes" answers there are. Provide this count as {"count": <insert count here>}.
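Because the grading prompt pins the tally to a {"count": N} format, scoring can be automated. A minimal sketch (the function name is illustrative):

```python
import re

def parse_fact_count(eval_output: str) -> int:
    """Extract the {"count": N} tally that the grading prompt asks for
    from the evaluator model's output."""
    match = re.search(r'\{"count":\s*(\d+)\}', eval_output)
    if match is None:
        raise ValueError("no count found in evaluator output")
    return int(match.group(1))
```

Dividing the parsed count by the number of required facts gives a per-answer score that can be averaged across the eval set.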

OpenAI meta-prompt

OpenAI's Generate button uses two main approaches:

  • Prompts: uses a meta-prompt that incorporates best practices to generate or improve prompts.
  • Schemas: uses a meta-schema to generate valid JSON and function syntax.

This is the current technique; it may later be replaced with approaches such as DSPy or gradient descent.

The meta-prompt instructs the model to create a good prompt based on your task description, or to improve an existing one. The meta-prompts in the Playground draw on OpenAI's prompt engineering best practices and real-world experience with users.

Specific meta-prompts are used for different output types, such as audio, to ensure that generated prompts adhere to the expected format.

  • Meta-prompt for text output
from openai import OpenAI

client = OpenAI()

META_PROMPT = """
Given a task description or existing prompt, produce a detailed system prompt to guide a language model in completing the task effectively.

# Guidelines

- Understand the Task: Grasp the main objective, goals, requirements, constraints, and expected output.
- Minimal Changes: If an existing prompt is provided, improve it only if it's simple. For complex prompts, enhance clarity and add missing elements without altering the original structure.
- Reasoning Before Conclusions**: Encourage reasoning steps before any conclusions are reached. ATTENTION! If the user provides examples where the reasoning happens afterward, REVERSE the order! NEVER START EXAMPLES WITH CONCLUSIONS!
    - Reasoning Order: Call out reasoning portions of the prompt and conclusion parts (specific fields by name). For each, determine the ORDER in which this is done, and whether it needs to be reversed.
    - Conclusion, classifications, or results should ALWAYS appear last.
- Examples: Include high-quality examples if helpful, using placeholders [in brackets] for complex elements.
   - What kinds of examples may need to be included, how many, and whether they are complex enough to benefit from placeholders.
- Clarity and Conciseness: Use clear, specific language. Avoid unnecessary instructions or bland statements.
- Formatting: Use markdown features for readability. DO NOT USE ``` CODE BLOCKS UNLESS SPECIFICALLY REQUESTED.
- Preserve User Content: If the input task or prompt includes extensive guidelines or examples, preserve them entirely, or as closely as possible. If they are vague, consider breaking down into sub-steps. Keep any details, guidelines, examples, variables, or placeholders provided by the user.
- Constants: DO include constants in the prompt, as they are not susceptible to prompt injection. Such as guides, rubrics, and examples.
- Output Format: Explicitly the most appropriate output format, in detail. This should include length and syntax (e.g. short sentence, paragraph, JSON, etc.)
    - For tasks outputting well-defined or structured data (classification, JSON, etc.) bias toward outputting a JSON.
    - JSON should never be wrapped in code blocks (```) unless explicitly requested.

The final prompt you output should adhere to the following structure below. Do not include any additional commentary, only output the completed system prompt. SPECIFICALLY, do not include any additional messages at the start or end of the prompt. (e.g. no "---")

[Concise instruction describing the task - this should be the first line in the prompt, no section header]

[Additional details as needed.]

[Optional sections with headings or bullet points for detailed steps.]

# Steps [optional]

[optional: a detailed breakdown of the steps necessary to accomplish the task]

# Output Format

[Specifically call out how the output should be formatted, be it response length, structure e.g. JSON, markdown, etc]

# Examples [optional]

[Optional: 1-3 well-defined examples with placeholders if necessary. Clearly mark where examples start and end, and what the input and output are. User placeholders as necessary.]
[If the examples are shorter than what a realistic example is expected to be, make a reference with () explaining how real examples should be longer / shorter / different. AND USE PLACEHOLDERS! ]

# Notes [optional]

[optional: edge cases, details, and an area to call or repeat out specific important considerations]
""".strip()

def generate_prompt(task_or_prompt: str):
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ],
    )

    return completion.choices[0].message.content

Meta-prompt for audio output

from openai import OpenAI

client = OpenAI()

META_PROMPT = """
Given a task description or existing prompt, produce a detailed system prompt to guide a realtime audio output language model in completing the task effectively.

# Guidelines

- Understand the Task: Grasp the main objective, goals, requirements, constraints, and expected output.
- Tone: Make sure to specifically call out the tone. By default it should be emotive and friendly, and speak quickly to avoid keeping the user just waiting.
- Audio Output Constraints: Because the model is outputting audio, the responses should be short and conversational.
- Minimal Changes: If an existing prompt is provided, improve it only if it's simple. For complex prompts, enhance clarity and add missing elements without altering the original structure.
- Examples: Include high-quality examples if helpful, using placeholders [in brackets] for complex elements.
   - What kinds of examples may need to be included, how many, and whether they are complex enough to benefit from placeholders.
  - It is very important that any examples included reflect the short, conversational output responses of the model.
Keep the sentences very short by default. Instead of 3 sentences in a row by the assistant, it should be split up with a back and forth with the user instead.
  - By default each sentence should be a few words only (5-20ish words). However, if the user specifically asks for "short" responses, then the examples should truly have 1-10 word responses max.
  - Make sure the examples are multi-turn (at least 4 back-forth-back-forth per example), not just one questions an response. They should reflect an organic conversation.
- Clarity and Conciseness: Use clear, specific language. Avoid unnecessary instructions or bland statements.
- Preserve User Content: If the input task or prompt includes extensive guidelines or examples, preserve them entirely, or as closely as possible. If they are vague, consider breaking down into sub-steps. Keep any details, guidelines, examples, variables, or placeholders provided by the user.
- Constants: DO include constants in the prompt, as they are not susceptible to prompt injection. Such as guides, rubrics, and examples.

The final prompt you output should adhere to the following structure below. Do not include any additional commentary, only output the completed system prompt. SPECIFICALLY, do not include any additional messages at the start or end of the prompt. (e.g. no "---")

[Concise instruction describing the task - this should be the first line in the prompt, no section header]

[Additional details as needed.]

[Optional sections with headings or bullet points for detailed steps.]

# Examples [optional]

[Optional: 1-3 well-defined examples with placeholders if necessary. Clearly mark where examples start and end, and what the input and output are. User placeholders as necessary.]
[If the examples are shorter than what a realistic example is expected to be, make a reference with () explaining how real examples should be longer / shorter / different. AND USE PLACEHOLDERS! ]

# Notes [optional]

[optional: edge cases, details, and an area to call or repeat out specific important considerations]
""".strip()

def generate_prompt(task_or_prompt: str):
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ],
    )

    return completion.choices[0].message.content
