
[Weekly Read] What is prompt-tuning?

Posted: 2024-04-30 17:34:05

Original article: https://research.ibm.com/blog/what-is-ai-prompt-tuning

Original author: Kim Martineau

(Nice to see a woman author!)

This is a popular-science blog post from IBM. Besides introducing hand-crafted hard prompts, AI-designed soft prompts made up of vectors of numbers, and prefix-tuning, which injects soft prompts into different layers of the model, it also covers several emerging directions in prompt-tuning, which are quite interesting.

For example, bring the idea of multi-task transfer learning into prompt design: can we learn one universal prompt that captures the knowledge shared across different tasks? Bam, out comes the MPT paper.

The world keeps changing and new knowledge keeps arriving. With tasks coming one after another, how do we design a prompt that keeps learning new tasks without forgetting old ones? Bam, out comes CODA-Prompt. It can fix mistakes as they arise without retaining any personal data, which is exactly the appeal of continual learning on "come-and-go" data streams.

The last one is impressive: using prompt design to correct the model bias introduced by "unequal" real-world data. IBM published two papers on this at NeurIPS 2022. The first, FairIJ, identifies the most biased data points in the training set and has the model set them aside via a prompt appended to the original prompt. The second, FairReprogram, takes a similar approach; search for the original papers if you're interested.

Prompt-tuning not only lowers the cost of retraining large models, it can also correct the model's behavior. The downside is the lack of interpretability, but that black-box nature is a common ailment of deep models anyway.

Below are some excerpts and highlights from the original article. Reading the original is the best way to get the full flavor, so I won't translate them here.

---------------------------------

Prompt-tuning originated with large language models but has since expanded to other foundation models, like transformers that handle other sequential data types, including audio and video. Prompts may be snippets of text, streams of speech, or blocks of pixels in a still image or video.

Hand-crafted prompts were quickly replaced by superior AI-designed prompts consisting of strings of numbers. In a paper the following year, Google researchers introduced so-called “soft” prompts designed by an AI that outperformed human-engineered “hard” prompts.
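As a concrete illustration of what a prompt "consisting of strings of numbers" looks like in practice, here is a minimal PyTorch sketch of soft prompt tuning: a small block of learnable embeddings is prepended to the input embeddings while the foundation model itself stays frozen. The class and parameter names (SoftPromptWrapper, n_prompt_tokens) are mine, not from the papers cited in the article.

```python
# Minimal soft-prompt-tuning sketch (illustrative names and shapes, not IBM's code).
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    def __init__(self, frozen_model, embed_dim, n_prompt_tokens=20):
        super().__init__()
        self.frozen_model = frozen_model
        for p in self.frozen_model.parameters():
            p.requires_grad = False               # the foundation model stays frozen
        # The "soft prompt": a learnable string of numbers, opaque to humans.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds):              # input_embeds: (batch, seq, embed_dim)
        prompt = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.frozen_model(torch.cat([prompt, input_embeds], dim=1))

# Stand-in for a large frozen model; only the prompt is trained.
wrapper = SoftPromptWrapper(nn.Linear(16, 16), embed_dim=16)
out = wrapper(torch.randn(2, 5, 16))              # (2, 25, 16): 20 prompt tokens + 5 input tokens
optimizer = torch.optim.Adam([wrapper.prompt], lr=1e-3)   # only 20 x 16 = 320 numbers are updated
```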

Around the same time, Stanford researchers introduced prefix-tuning, another automated prompt-design method that allows the model to learn one task after another. Prefix-tuning combines soft prompts with prompts injected into layers of the deep learning model for added flexibility. Though prompt-tuning is more efficient, both techniques let you freeze the model and skip expensive retraining.
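And a rough sketch of the prefix-tuning idea: rather than only prepending a prompt at the input, learnable prefix vectors are injected inside an attention layer as extra keys and values, again with the layer's own weights frozen. This is a simplified approximation of the method, with names and shapes of my own choosing.

```python
# Simplified prefix-tuning sketch: a learned prefix injected inside one attention
# layer as extra key/value vectors; the layer's weights stay frozen. Illustrative only.
import torch
import torch.nn as nn

class PrefixedAttention(nn.Module):
    def __init__(self, embed_dim=16, n_heads=4, prefix_len=10):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        for p in self.attn.parameters():
            p.requires_grad = False               # frozen layer weights
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, x):                         # x: (batch, seq, embed_dim)
        b = x.size(0)
        k = torch.cat([self.prefix_k.expand(b, -1, -1), x], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), x], dim=1)
        out, _ = self.attn(x, k, v)               # queries come only from the input
        return out

out = PrefixedAttention()(torch.randn(2, 5, 16))  # (2, 5, 16)
```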

Unlike hard prompts, AI-designed soft prompts are unrecognizable to the human eye. Each prompt consists of an embedding, or string of numbers, that distills knowledge from the larger model. High level or task specific, the prompt acts as a substitute for additional training data. Researchers recently estimated that a good language classifier prompt is worth hundreds to thousands of extra data points.

One drawback of prompt-tuning is its lack of interpretability. The AI discovers prompts optimized for a given task but can’t explain why it chose those embeddings. Like deep learning models themselves, soft prompts are opaque.

One area is multi-task learning. Foundation models often need to pivot quickly, from answering customer questions to identifying negative comments in online reviews. Rather than design a unique prompt for each task, researchers are discovering ways to create universal prompts that can be easily recycled.

“Think of it as applying multi-task transfer learning to prompts,” said Panda. “You learn a single prompt that consolidates task-shared knowledge so you can quickly adapt the model.”

In an upcoming paper at the International Conference on Learning Representations (ICLR), Panda and his colleagues show that their Multi-task Prompt Tuning (MPT) method outperformed other methods, and even did better than models fine-tuned on task-specific data.
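A rough sketch of how such a "single prompt that consolidates task-shared knowledge" could look: one shared soft prompt is reused across tasks and lightly modulated by a tiny task-specific factor. This is my paraphrase of the idea described above, not the MPT authors' code; all names and shapes are assumptions.

```python
# Sketch of a multi-task prompt: one shared prompt, modulated per task by a
# rank-one rescaling. A paraphrase of the idea, not the MPT implementation.
import torch
import torch.nn as nn

class MultiTaskPrompt(nn.Module):
    def __init__(self, n_tasks, n_tokens=20, embed_dim=16):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)  # task-shared knowledge
        self.u = nn.Parameter(torch.ones(n_tasks, n_tokens))                 # per-task row factors
        self.v = nn.Parameter(torch.ones(n_tasks, embed_dim))                # per-task column factors

    def forward(self, task_id):
        # Task prompt = shared prompt rescaled elementwise by a rank-one matrix,
        # so adapting to a new task touches only a handful of extra parameters.
        return self.shared * torch.outer(self.u[task_id], self.v[task_id])

prompts = MultiTaskPrompt(n_tasks=3)
task_prompt = prompts(task_id=1)   # a (20, 16) prompt to prepend for task 1
```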

Another up-and-coming area of research involves finding prompts on the fly as an AI model continually learns new tasks and concepts. Acquiring new knowledge involves updating the model on new data, but sometimes old knowledge gets overwritten in what’s known as catastrophic forgetting.

In a pre-print paper on arXiv, IBM researchers show that a technique called CODA-Prompt can discover prompts for consecutive, never-seen-before tasks, like classifying drawings, followed by paintings and photos without the model forgetting what it originally learned.

This type of flexible prompt for continual learning allows you to fix mistakes as they arise, without retaining the data and running afoul of privacy laws. “Mistakes might be observed in a chat session from user data,” said Leonid Karlinsky, an IBM researcher at the MIT-IBM Lab who co-developed the technique. “CODA-Prompt lets you correct the mistakes without holding on to that personal data.”
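A loose sketch of how a prompt can be discovered "on the fly" in this spirit: the prompt fed to the frozen model is assembled per input as a weighted combination of learned prompt components, and fresh components can be appended for new tasks while old ones are frozen, so no old data needs to be stored. This is my reading of the description above, not the CODA-Prompt code; all names and shapes are assumptions.

```python
# Loose sketch of input-conditioned prompt composition for continual learning.
# Names and shapes are illustrative, not taken from the CODA-Prompt paper.
import torch
import torch.nn as nn

class ComposedPrompt(nn.Module):
    def __init__(self, n_components=8, n_tokens=8, embed_dim=16):
        super().__init__()
        self.components = nn.Parameter(torch.randn(n_components, n_tokens, embed_dim) * 0.02)
        self.keys = nn.Parameter(torch.randn(n_components, embed_dim) * 0.02)

    def forward(self, query):                 # query: (batch, embed_dim), e.g. an image feature
        weights = torch.softmax(query @ self.keys.T, dim=-1)          # (batch, n_components)
        # Per-input prompt: a weighted sum of components -> (batch, n_tokens, embed_dim)
        return torch.einsum("bc,cte->bte", weights, self.components)

# For a new task, one could freeze the existing components and append new ones,
# keeping old knowledge without holding on to any old data.
prompt = ComposedPrompt()(torch.randn(4, 16))
```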

Finally, prompt-tuning also shows promise as a quick and low-cost tool to mitigate algorithmic bias. Because AI models are trained on real-world data, they inevitably absorb society’s biases, which can lead to decisions that perpetuate and exacerbate inequities in everything from healthcare to hiring. IBM researchers recently presented a pair of papers at the 2022 NeurIPS conference aimed at counteracting race and gender bias in large language and vision models using AI-designed prompts.

One of the researchers’ methods, called FairIJ, identifies the most biased data points in the model’s training set and has the model set them aside via prompts appended to the model’s original prompts. Tested on a salary-prediction task, a model tuned with FairIJ achieved more accurate, less biased results than several top bias-mitigation methods, the researchers found.

Prompt-tuning not only shrinks the cost of tailoring large models to new applications, said IBM's Cox, it can correct the model's behavior — in this case, mitigating bias.

Tags: What, task, prompt, tuning, prompts, model, data
From: https://www.cnblogs.com/Aikoin/p/18168452
