
Paper Breakdown: GPT-RE


Paper information:

Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Jiwei Li, Sadao Kurohashi:
GPT-RE: In-context Learning for Relation Extraction using Large Language Models. EMNLP 2023: 3534-3547

Introduction

Paragraph 1: Research background: GPT-3 and ICL

% The NLP frontier: GPT-3
The emergence of large language models (LLMs) such as GPT-3 (Brown et al., 2020; Thoppilan et al., 2022; Chowdhery et al., 2022; Rae et al., 2021; Hoffmann et al., 2022) represents a significant advancement in natural language processing (NLP).

% From fine-tuning to ICL
Instead of following a pretraining-and-finetuning pipeline (Devlin et al., 2019; Beltagy et al., 2019; Raffel et al., 2019; Lan et al., 2019; Zhuang et al., 2021), which finetunes a pre-trained model on a task-specific dataset in a fully-supervised manner, LLMs employ a new paradigm known as in-context learning (ICL) (Brown et al., 2020; Min et al., 2022a) which formulates an NLP task under the paradigm of language generation and makes predictions by learning from a few demonstrations.

% ICL vs. fine-tuning
Under the framework of ICL, LLMs achieve remarkable performance rivaling previous fully-supervised methods even with only a limited number of demonstrations provided in various tasks such as solving math problems, commonsense reasoning, text classification, fact retrieval, natural language inference, and semantic parsing (Brown et al., 2020; Min et al., 2022b; Zhao et al., 2021; Liu et al., 2022b; Shin et al., 2021).

Paragraph 2: Prior work

% Prior work: ICL
Despite the overall promising performance of LLMs, the utilization of ICL for relation extraction (RE) is still suboptimal.

% Background: RE
RE is a central task for knowledge retrieval that requires a deep understanding of natural language: it seeks to identify a predefined relation between a specific entity pair mentioned in the input sentence, or NULL if no relation is found.
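To make the task format concrete, here is a minimal sketch of an RE instance in Python. The field names and the example label are illustrative (SemEval-style), not taken from the paper:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a minimal RE instance. The task maps (sentence, entity pair)
# to one relation from a predefined label set, or to NULL when none of them holds.
@dataclass
class REInstance:
    sentence: str
    head: str                       # head entity mention
    tail: str                       # tail entity mention
    relation: Optional[str] = None  # gold relation label; None stands in for NULL

example = REInstance(
    sentence="The fire was caused by a short circuit.",
    head="fire",
    tail="short circuit",
    relation="Cause-Effect(e2,e1)",  # hypothetical SemEval-style label
)
```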

% Background: RE + ICL
Given a test input, ICL for RE prompts the input of LLMs with the task instruction, a few demonstrations retrieved from the training data, and the test input itself.
Then LLMs generate the corresponding relation.
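As a rough sketch of this prompt layout (the exact instruction wording and demonstration format used by GPT-RE are not reproduced here; the template below is an assumption for illustration), assuming each example is a dict with sentence, head, tail, and relation fields:

```python
def build_icl_prompt(instruction, demonstrations, test_input):
    """Assemble a vanilla ICL prompt for RE: task instruction, retrieved
    demonstrations as input-label pairs, and finally the test input itself."""
    parts = [instruction]
    for d in demonstrations:  # each d: {"sentence": ..., "head": ..., "tail": ..., "relation": ...}
        parts.append(
            f"Sentence: {d['sentence']}\n"
            f"Entity pair: ({d['head']}, {d['tail']})\n"
            f"Relation: {d['relation']}"
        )
    # The test input comes last; the LLM is expected to continue with a relation label.
    parts.append(
        f"Sentence: {test_input['sentence']}\n"
        f"Entity pair: ({test_input['head']}, {test_input['tail']})\n"
        f"Relation:"
    )
    return "\n\n".join(parts)
```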

% Prior work: RE + ICL
Recent research (Gutiérrez et al., 2022) has sought to apply GPT-3 ICL to biomedical RE, but the results are relatively negative and suggest that GPT-3 ICL still significantly underperforms fine-tuned models.

Paragraph 3.1: Shortcomings of prior work

% Overview: shortcomings
The reasons behind the pitfall of GPT-3 ICL in RE are twofold:

% Shortcoming 1: low relevance of entities and relations
(1) The low relevance regarding entity and relation in the retrieved demonstrations for ICL.

% Shortcoming 1: low relevance of entities and relations: only sentence embeddings considered
Demonstrations are selected randomly or via k-nearest neighbor (kNN) search based on sentence embedding (Liu et al., 2022b; Gutiérrez et al., 2022).

% Shortcoming 1: low relevance of entities and relations: only sentence embeddings considered: entities and relations ignored
Regrettably, kNN-retrieval based on sentence embedding is more concerned with the relevance of the overall sentence semantics and not as much with the specific entities and relations it contains, which leads to low-quality demonstrations.

% Shortcoming 1: low relevance of entities and relations: only sentence embeddings considered: entities and relations ignored: an illustrative example
As shown in Figure 2, the test input retrieves a sentence that is semantically similar overall but not the desired one in terms of entities and relations.
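For reference, a minimal sketch of the kNN-retrieval baseline described above, assuming a generic sentence encoder from `sentence-transformers` (the encoder and model name are illustrative; the cited work may use a different embedding model):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Baseline retrieval: rank training sentences purely by overall sentence similarity,
# ignoring which entities and relations they contain.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def knn_demonstrations(test_sentence, train_sentences, k=5):
    vecs = encoder.encode([test_sentence] + train_sentences, normalize_embeddings=True)
    scores = vecs[1:] @ vecs[0]       # cosine similarity (vectors are L2-normalized)
    top = np.argsort(-scores)[:k]     # indices of the k most similar training sentences
    return [train_sentences[i] for i in top]
```

Because nothing in this score depends on the entity pair or the relation label, two sentences expressing completely different relations can still be near neighbors, which is exactly the failure mode described above.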

Paragraph 3.2: Shortcomings of prior work

% Shortcoming 2: missing explanations of the "input-label" mapping
(2) The lack of explaining input-label mappings in demonstrations leads to poor ICL effectiveness: A vanilla form of ICL lists all demonstrations as input-label pairs without any explanations.

% Shortcoming 2: missing explanations of the "input-label" mapping: LLMs learn only shallow surface clues
This may mislead LLMs to learn shallow clues from surface words, while a relation can be presented in diverse forms due to language complexity.

% Shortcoming 2: missing explanations of the "input-label" mapping: improving the quality of each single demonstration
Especially because ICL is constrained by a maximum input length, optimizing the learning efficiency of each single demonstration becomes extremely important.

Paragraph 4.1: This paper's approach

% Motivation
To this end, we propose GPT-RE for the RE task.

% Overview: retrieval + reasoning
GPT-RE employs two strategies to resolve the issues above: (1) task-aware retrieval and (2) gold label-induced reasoning.

% Method 1: task-aware retrieval: overview
For (1) task-aware retrieval, its core is to use representations that deliberately encode and emphasize entity and relation information rather than sentence embedding for kNN search.

% Method 1: task-aware retrieval: details
We achieve this by two different retrieval approaches: (a) entity-prompted sentence embedding; (b) fine-tuned relation representation, which naturally places emphasis on entities and relations.

% Method 1: task-aware retrieval: advantages
Both methods contain more RE-specific information than sentence semantics, thus effectively addressing the problem of low relevance.
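A minimal sketch of variant (a), entity-prompted sentence embedding, follows. The template wording is an assumption; the point is that the text handed to the encoder explicitly mentions the entity pair, so kNN similarity is no longer driven by sentence semantics alone. Variant (b) would instead use the entity-pair representation from a fine-tuned RE encoder as the retrieval key, with the same kNN search on top.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder, as in the sketch above

def entity_prompted_text(example):
    # Fold the entity pair into the text that gets embedded (template is illustrative).
    return (f'The relation between "{example["head"]}" and "{example["tail"]}" '
            f'in the context: {example["sentence"]}')

def knn_entity_aware(test_example, train_examples, k=5):
    texts = [entity_prompted_text(e) for e in [test_example] + train_examples]
    vecs = encoder.encode(texts, normalize_embeddings=True)
    top = np.argsort(-(vecs[1:] @ vecs[0]))[:k]   # k nearest entity-aware neighbors
    return [train_examples[i] for i in top]
```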

Paragraph 4.2: This paper's approach

% Method 2: gold label-induced reasoning: overview
For (2) gold label-induced reasoning, we propose to inject the reasoning logic into the demonstration to provide more evidence to align an input and the label, a strategy akin to the Chain-of-Thought (CoT) research (Wei et al., 2022; Wang et al., 2022b; Kojima et al., 2022).

% Method 2: gold label-induced reasoning: details and differences from prior work
But different from previous work, we allow LLMs to elicit the reasoning process to explain not only why a given sentence should be classified under a particular label but also why a NULL example should not be assigned to any of the pre-defined categories.

% Method 2: gold label-induced reasoning: advantages
This process significantly improves the ability of LLMs to align the relations with diverse expression forms.
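A rough sketch of how such gold label-induced demonstrations could be assembled; the prompt wording and the `llm` callable are assumptions for illustration, not the paper's exact prompts:

```python
def induce_reasoning(llm, demo, relation_set):
    """Ask an LLM to explain why the gold label holds for a demonstration
    (or, for a NULL example, why none of the predefined relations applies),
    so the explanation can be attached to the demonstration in the ICL prompt."""
    if demo["relation"] == "NULL":
        question = (
            f'Explain why none of the relations {sorted(relation_set)} holds between '
            f'"{demo["head"]}" and "{demo["tail"]}" in: {demo["sentence"]}'
        )
    else:
        question = (
            f'Explain why the relation between "{demo["head"]}" and "{demo["tail"]}" '
            f'in "{demo["sentence"]}" is {demo["relation"]}.'
        )
    return {**demo, "reasoning": llm(question)}  # llm: any text-completion callable
```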

Paragraph 5.1: Experimental results

% Problem raised: relation hallucination (overpredicting)
Recent work reveals another crucial problem termed "overpredicting", as shown in Figure 3: we observe that LLMs have a strong inclination to wrongly classify NULL examples into other predefined labels.

% Relation hallucination (overpredicting): related work
A similar phenomenon has also been observed in other tasks such as NER (Gutiérrez et al., 2022; Blevins et al., 2022).

% This paper's method: experimental effect
In this paper, we show that this issue can be alleviated if the representations used for retrieval are supervised with the whole set of NULL examples in the training data.

Paragraph 5.2: Experimental results

% Experimental setup: RE datasets
We evaluate our proposed method on three popular general domain RE datasets: Semeval 2010 task 8, TACRED and ACE05, and one scientific domain dataset SciERC.

% Results: overview: surpasses both GPT-3 baselines and fully-supervised fine-tuned models
We observe that GPT-RE achieves improvements over not only existing GPT-3 baselines, but also fully-supervised baselines.

% Results: details: SOTA on two datasets + competitive results on the other two
Specifically, GPT-RE achieves SOTA performances on the Semeval and SciERC datasets, and competitive performances on the TACRED and ACE05 datasets.

From: https://www.cnblogs.com/fengyubo/p/18377377
