Original article: https://research.ibm.com/blog/what-is-ai-prompt-tuning
Author: Kim Martineau
(Yay, a woman author!)
This post comes from a popular-science blog by IBM. Besides introducing hand-crafted hard prompts, AI-designed soft prompts made up of vectors or numbers, and prefix-tuning, which injects soft prompts into different layers of the model, it also covers several emerging directions in prompt-tuning that are quite interesting.
For example, bringing the idea of multi-task transfer learning into prompt design: could we design a universal prompt that learns the knowledge shared across different tasks? Bam, out comes the MPT paper.
The world never stops changing, and new knowledge keeps pouring in. With tasks arriving one after another, how do we design a prompt that keeps learning new knowledge without forgetting the old? Bam, out comes CODA-Prompt. It lets you fix mistakes as they arise without retaining any personal data, which is the benefit of applying continual learning to "come-and-go" data streams.
The last one is impressive: using prompt design to correct the model bias introduced by "unequal" real-world data. IBM presented two papers on this at NeurIPS 2022. The first, FairIJ, identifies the most biased data points in the training set and, via prompts appended to the original prompt, gets the model to set them aside. The second, FairReprogram, takes a similar approach; search for the paper if you are interested.
Prompt-tuning not only lowers the cost of retraining large models, it can also correct a model's behavior. The drawback is the lack of interpretability, though being a black box is a common ailment of deep models anyway.
I have collected some excerpts and highlights from the original article below. Reading the original is always best, so I won't translate them here.
---------------------------------
Prompt-tuning originated with large language models but has since expanded to other foundation models, like transformers that handle other sequential data types, including audio and video. Prompts may be snippets of text, streams of speech, or blocks of pixels in a still image or video.
Hand-crafted prompts were quickly replaced by superior AI-designed prompts consisting of strings of numbers. In a paper the following year, Google researchers introduced so-called “soft” prompts designed by an AI that outperformed human-engineered “hard” prompts.
Around the same time, Stanford researchers introduced prefix-tuning, another automated prompt-design method that allows the model to learn one task after another. Prefix-tuning combines soft prompts with prompts injected into layers of the deep learning model for added flexibility. Though prompt-tuning is more efficient, both techniques let you freeze the model and skip expensive retraining.
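To make the "injected into layers" part concrete, here is a minimal numpy sketch of the prefix-tuning idea under my own simplifications (all names and sizes are illustrative, not IBM's or Stanford's code): learned key/value "prefix" vectors are concatenated into every attention layer while the model's own weights stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_prefix, d_head = 6, 4, 16

# Prefix-tuning learns key/value "prefix" vectors for EVERY attention layer,
# not just a single prompt at the input embedding layer.
prefixes = [
    {"key": rng.normal(size=(n_prefix, d_head)),
     "value": rng.normal(size=(n_prefix, d_head))}
    for _ in range(n_layers)
]

def attend(query, keys, values, prefix):
    """One frozen attention step with a learned prefix injected into it."""
    # The prefix keys/values are concatenated in front of the sequence's own
    # keys/values, so every token can attend to the learned prefix positions.
    k = np.concatenate([prefix["key"], keys], axis=0)
    v = np.concatenate([prefix["value"], values], axis=0)
    scores = query @ k.T / np.sqrt(d_head)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over all keys
    return w @ v

# One layer's worth of (random, frozen) queries/keys/values for 3 tokens.
out = attend(rng.normal(size=(3, d_head)),
             rng.normal(size=(10, d_head)),
             rng.normal(size=(10, d_head)),
             prefixes[0])
```

During tuning, only the entries of `prefixes` would receive gradients; that is the "freeze the model and skip expensive retraining" part.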
Unlike hard prompts, AI-designed soft prompts are unrecognizable to the human eye. Each prompt consists of an embedding, or string of numbers, that distills knowledge from the larger model. High level or task specific, the prompt acts as a substitute for additional training data. Researchers recently estimated that a good language classifier prompt is worth hundreds to thousands of extra data points.
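The "string of numbers" idea is easy to picture in code. A minimal numpy sketch (shapes and names are my own choices for illustration): a soft prompt is just a small trainable matrix of virtual-token embeddings prepended to the frozen model's input embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, prompt_len, seq_len = 16, 4, 10

# Stand-in for the frozen model's embedding of a tokenized input sequence.
token_embeds = rng.normal(size=(seq_len, d_model))

# The soft prompt: a small matrix of "virtual token" embeddings.
# During tuning, ONLY these prompt_len * d_model numbers are updated;
# every weight of the large model itself stays frozen.
soft_prompt = rng.normal(size=(prompt_len, d_model))

# The prompt is simply prepended to the input embeddings before the
# transformer runs, acting like extra context no human can read.
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)
```

The resulting prompt rows correspond to no real vocabulary tokens, which is exactly why soft prompts are "unrecognizable to the human eye".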
One drawback of prompt-tuning is its lack of interpretability. The AI discovers prompts optimized for a given task but can’t explain why it chose those embeddings. Like deep learning models themselves, soft prompts are opaque.
One area is multi-task learning. Foundation models often need to pivot quickly, from answering customer questions to identifying negative comments in online reviews. Rather than design a unique prompt for each task, researchers are discovering ways to create universal prompts that can be easily recycled.
“Think of it as applying multi-task transfer learning to prompts,” said Panda. “You learn a single prompt that consolidates task-shared knowledge so you can quickly adapt the model.”
In an upcoming paper at the International Conference on Learning Representations (ICLR), Panda and his colleagues show that their Multi-task Prompt Tuning (MPT) method outperformed other methods, and even did better than models fine-tuned on task-specific data.
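As I understand the MPT paper, its core trick is decomposing each task's prompt into one shared matrix modulated by a cheap low-rank, task-specific update. A rough numpy sketch (dimensions and variable names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
prompt_len, d_model, n_tasks = 8, 16, 3

# One shared prompt matrix is meant to consolidate task-shared knowledge...
shared_prompt = rng.normal(size=(prompt_len, d_model))

# ...while each task stores only a cheap rank-1 modulation (two small vectors).
task_u = rng.normal(size=(n_tasks, prompt_len, 1))
task_v = rng.normal(size=(n_tasks, 1, d_model))

def task_prompt(k):
    # Task prompt = shared_prompt ⊙ (u_k v_k^T): a Hadamard product with a
    # rank-1 matrix, so per-task storage is just prompt_len + d_model numbers.
    return shared_prompt * (task_u[k] @ task_v[k])

p0, p1 = task_prompt(0), task_prompt(1)
```

Adapting to a new task then means learning only the two small vectors, which is why the model can "quickly adapt" as Panda describes.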
Another up-and-coming area of research involves finding prompts on the fly as an AI model continually learns new tasks and concepts. Acquiring new knowledge involves updating the model on new data, but sometimes old knowledge gets overwritten in what’s known as catastrophic forgetting.
In a pre-print paper on arXiv, IBM researchers show that a technique called CODA-Prompt can discover prompts for consecutive, never-seen-before tasks, like classifying drawings, followed by paintings and photos without the model forgetting what it originally learned.
This type of flexible prompt for continual learning allows you to fix mistakes as they arise, without retaining the data and running afoul of privacy laws. “Mistakes might be observed in a chat session from user data,” said Leonid Karlinsky, an IBM researcher at the MIT-IBM Lab who co-developed the technique. “CODA-Prompt lets you correct the mistakes without holding on to that personal data.”
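My loose reading of the CODA-Prompt mechanism, sketched in numpy (all names and sizes are illustrative assumptions, not the authors' code): prompts are assembled per input as an attention-weighted sum over a pool of components, and new tasks append new components while earlier ones stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
n_components, prompt_len, d_model, d_key = 5, 4, 16, 16

# A growing pool of prompt components, each with its own key.
# New tasks append new components while earlier ones are frozen, which is
# what limits overwriting old knowledge (catastrophic forgetting).
components = rng.normal(size=(n_components, prompt_len, d_model))
keys = rng.normal(size=(n_components, d_key))

def compose_prompt(query):
    # The prompt is built per input: attention between an input-derived query
    # and the component keys gives weights, and the final prompt is the
    # weighted sum of the pool's components.
    weights = keys @ query                             # (n_components,)
    return np.tensordot(weights, components, axes=1)   # (prompt_len, d_model)

prompt = compose_prompt(rng.normal(size=(d_key,)))
```

Because the composition depends on the input, never-seen-before tasks can reuse old components in new combinations without any stored user data.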
Finally, prompt-tuning also shows promise as a quick and low-cost tool to mitigate algorithmic bias. Because AI models are trained on real-world data, they inevitably absorb society’s biases, which can lead to decisions that perpetuate and exacerbate inequities in everything from healthcare to hiring. IBM researchers recently presented a pair of papers at the 2022 NeurIPS conference aimed at counteracting race and gender bias in large language and vision models using AI-designed prompts.
One of the researchers’ methods, called FairIJ, identifies the most biased data points in the model’s training set and has the model set them aside via prompts appended to the model’s original prompts. Tested on a salary-prediction task, a model tuned with FairIJ achieved more accurate, less biased results than several top bias-mitigation methods, the researchers found.
Prompt-tuning not only shrinks the cost of tailoring large models to new applications, said IBM's Cox, it can correct the model's behavior — in this case, mitigating bias.
Tags: What, task, prompt, tuning, prompts, model, data
From: https://www.cnblogs.com/Aikoin/p/18168452