
Paper Notes: Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader

Posted: 2022-12-21 14:05:52

  Knowledge base question answering (KBQA) is a form of domain QA: given a question and a knowledge base, the system finds the answer entity in the KB. Since real-world KBs are incomplete, this work incorporates unstructured text to answer questions whose answers cannot be found directly in the KB.

1. Brief Information

| # | Attribute | Value |
| --- | --- | --- |
| 1 | Model name | SGReader + KAReader |
| 2 | Field | Natural language processing |
| 3 | Research topic | Knowledge base question answering |
| 4 | Core content | Relation detection, KBQA |
| 5 | GitHub source | Knowledge-Aware-Reader |
| 6 | Paper PDF | https://arxiv.org/pdf/1905.07098 |

2. Abstract (Translated)

  We propose a new end-to-end question answering model that combines an incomplete KB with results retrieved from unstructured text, based on the assumption that the KB is easy to query and that the query results can help with reading the text. The model first accumulates entity knowledge from the question-related KB subgraph, then reformulates the question in latent space and reads the text with the accumulated entity knowledge, and finally aggregates evidence from both the KB and the text to predict the answer. The approach improves performance on both simple and complex questions.

3. Related Work

  Prior work extracts answers directly from the knowledge base, but many answers simply cannot be found there. For example:

[Figure: example of a question whose answer is missing from the KB]

For the question "Who did Cam Newton sign with", the KB contains a linked subgraph but no exact answer, so unstructured text must be consulted. This is the motivation of the paper.

  There is existing work on text-based open-domain QA; the authors build on it and combine it with KBQA to mitigate the problems caused by KB incompleteness. The proposed end-to-end model has two components:

(1) Given a question and its topic entities, a subgraph of the KB is retrieved; from this subgraph the semantics of the related entities are obtained and fused with the question semantics. This part is the SubGraph Reader.
(2) A conditional gating mechanism dynamically decides how much KB information to read when combining the question with the unstructured text. This part, which couples the KB with the text, is the Knowledge-Aware Text Reader.

4. Method

4.1 Task Definition

  Given a question, its topic entities are linked to the KB and a question-specific subgraph is retrieved; the authors obtain this subgraph directly with PageRank. In addition, an existing document retriever (from "Reading Wikipedia to Answer Open-Domain Questions") collects relevant Wikipedia passages, with entity mentions in the documents aligned to KB entities. The task is therefore: given a question and its topic entities, retrieve the answer from the KB and the text.
  The model consists of two components: the SubGraph Reader and the Knowledge-Aware Text Reader.

[Figure: overall model architecture]

In brief: given a question, the model first links its topic entities and encodes each one from its neighboring subgraph, producing knowledge-enriched entity representations. It then fuses this KB information with the question, reads the unstructured text through a gating unit that selectively extracts semantics, obtains document representations, and finally predicts the answer from both the KB and the documents.

4.2 SubGraph Reader

  In this part, to extract the relevant KB information, a graph attention mechanism aggregates knowledge into each entity from its neighboring entities, yielding a representation vector for every entity.

(1) A shared-parameter LSTM first encodes the question $q$ and a candidate relation $r$, giving hidden states $\vec{h}^q_j$ and $\vec{h}^r_i$ respectively. A dot-product self-attention layer over $\vec{h}^r_i$ then produces a single relation vector:

$$\vec{r} = \sum_i \alpha_i \vec{h}^r_i, \qquad \alpha_i \propto \exp\big(\vec{w}_r^{\top} \vec{h}^r_i\big)$$

where $\vec{w}_r$ is a trainable attention vector and $\vec{r}$ is the resulting relation representation.

(2) Unlike previous methods, where each relation is matched against the question as a whole, the authors match at a finer granularity (question-relation matching). Using the $\vec{r}$ computed above, an attention weight is first assigned to every question token, $\gamma_j \propto \exp\big(\vec{r}^{\top} \vec{h}^q_j\big)$; the tokens are then aggregated by weighted sum, $\vec{q}^{\,r} = \sum_j \gamma_j \vec{h}^q_j$; finally the match is computed as $s_r = \vec{r}^{\top} \vec{q}^{\,r}$. The resulting $s_r$ is the similarity between the question and the current relation.
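As a concrete sketch of steps (1)-(2), the fine-grained question-relation matching can be written in a few lines of numpy; `Hq`, `Hr`, and `w_r` are hypothetical stand-ins for the LSTM hidden states and the trainable attention parameter:

```python
import numpy as np

def softmax(x):
    x = np.asarray(x, float)
    e = np.exp(x - x.max())
    return e / e.sum()

def relation_question_match(Hq, Hr, w_r):
    """Fine-grained question-relation matching (sketch).

    Hq: (Lq, d) question token states; Hr: (Lr, d) relation token states;
    w_r: (d,) self-attention parameter (hypothetical)."""
    alpha = softmax(Hr @ w_r)   # self-attention over relation tokens
    r = alpha @ Hr              # relation vector
    gamma = softmax(Hq @ r)     # relation-conditioned attention over question tokens
    q_r = gamma @ Hq            # re-weighted question vector
    return float(r @ q_r)      # match score s_r
```

With uniform attention (zero parameters, orthogonal toy states) the score reduces to a plain dot product of the two averaged vectors.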

(3) A binary indicator $I[e_i \in \mathcal{E}_0]$ is defined for each entity $e_i$ adjacent to a topic entity: its value is 1 if $e_i$ is itself one of the question's topic entities, and 0 otherwise. Put simply, when a neighbor of a topic entity is also a topic entity, the triple connecting the two is especially likely to be relevant to the question. Relying on this property, a new attention score over neighbors is defined:

$$\tilde{a}_{(r_i, e_i)} \propto \exp\big(I[e_i \in \mathcal{E}_0] + s_{r_i}\big)$$

This attention depends both on the indicator $I[\cdot]$ and on the similarity between the adjacent relation and the question: it reflects not only how well the relation matches the question ($s_{r_i}$) but also whether the neighbor entity is itself a topic entity, a relevance signal that is naturally 1 or 0.

(A simple sketch is shown in red in the figure below; it can be skipped.)

[Figure: sketch of the neighbor attention]
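A minimal sketch of this neighbor attention, assuming (as the text suggests) that the relation match score and the 0/1 topic-entity indicator are combined additively before the softmax:

```python
import numpy as np

def neighbor_attention(rel_scores, is_topic):
    """Attention over a topic entity's neighbors (sketch).

    rel_scores: match scores s_r of the adjacent relations;
    is_topic: 0/1 indicator I[e_i in E_0] per neighbor entity.
    The additive combination is an assumption about the exact form."""
    logits = np.asarray(rel_scores, float) + np.asarray(is_topic, float)
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

A neighbor that is itself a topic entity gets a boost even when its relation score ties with another neighbor's.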

(4) This is the core step. The previous three parts produced the question and relation representations together with the neighbor attention; this step combines them. The entity update is defined as:

$$\vec{e}\,' = \gamma^{e}\, \vec{e} + (1-\gamma^{e}) \sum_{(r_i, e_i) \in N_e} \tilde{a}_{(r_i, e_i)}\, \sigma\big(W_e [\vec{r}_i ; \vec{e}_i]\big)$$

where $\vec{e}$ is the pretrained knowledge embedding of the topic entity and $\gamma^{e}$ is a gating unit that decides how much information to keep from the entity itself. The second term is an attention-weighted sum over all tuples $(r_i, e_i)$ adjacent to the entity $e$, with $N_e$ denoting the set of its neighboring entities and edges. This part works the same way as standard graph attention.

The resulting $\vec{e}\,'$ contains the knowledge of the topic entity together with its neighboring subgraph, and represents the KB-side semantics.
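Putting the pieces together, the gated aggregation in step (4) can be sketched as follows; `W_e` and `w_g` stand in for the trainable parameters, and the tanh non-linearity inside the neighbor transform is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aggregate_entity(e_vec, neighbors, attn, W_e, w_g):
    """Gated subgraph aggregation (sketch).

    e_vec: pretrained embedding of the entity; neighbors: list of
    (relation_vec, entity_vec) tuples; attn: neighbor attention weights;
    W_e, w_g: hypothetical trainable parameters."""
    # transform each (relation; entity) pair, then attention-weighted sum
    msgs = np.stack([np.tanh(W_e @ np.concatenate([r, n])) for r, n in neighbors])
    neigh = np.asarray(attn) @ msgs
    # gate between the entity's own embedding and the neighborhood message
    gamma = sigmoid(w_g @ np.concatenate([e_vec, neigh]))
    return gamma * e_vec + (1.0 - gamma) * neigh
```

With zero parameters the gate sits at 0.5, so the output is the midpoint between the entity embedding and the neighbor message.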

4.3 Knowledge-Aware Text Reader

  Besides extracting knowledge from the KB, the authors also extract semantic information from unstructured text, following reading-comprehension methods.

(1) Starting from the question hidden states $\vec{h}^q_j$ above, a self-attention layer $b_j \propto \exp\big(\vec{w}_q^{\top} \vec{h}^q_j\big)$ aggregates the tokens by weighted sum into a sentence vector $\vec{q} = \sum_j b_j \vec{h}^q_j$, which captures only sentence-level semantics. Since every question has several topic entities, the SubGraph Reader yields several vectors $\vec{e}\,'$; taking their mean, $\vec{e}^{\,q} = \frac{1}{|\mathcal{E}_0|} \sum_{e \in \mathcal{E}_0} \vec{e}\,'$, gives the average KB knowledge over all topic entities. To combine $\vec{q}$ with $\vec{e}^{\,q}$, the query is reformulated as:

$$\vec{q}\,' = \gamma^{q}\, \vec{q} + (1-\gamma^{q}) \tanh\big(W_q [\vec{q} ; \vec{e}^{\,q}]\big)$$

The first term uses a gate to selectively keep sentence-level semantics; the second selectively injects KB-related information, and one can see that the expression inside the tanh fuses the question with the KB knowledge. The resulting $\vec{q}\,'$ is the knowledge-enriched question representation.
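The reformulation step can be sketched like this, with `W_q` and `w_g` as hypothetical trainable parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reformulate_query(q, e_q, W_q, w_g):
    """Knowledge-aware query reformulation (sketch).

    q: sentence-level question vector; e_q: mean topic-entity knowledge;
    W_q, w_g: hypothetical trainable parameters."""
    fused = np.tanh(W_q @ np.concatenate([q, e_q]))   # question + KB fusion
    gamma = sigmoid(w_g @ np.concatenate([q, e_q]))   # gate
    return gamma * q + (1.0 - gamma) * fused
```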

(2) This is the core of the Knowledge-Aware Text Reader. The authors propose a conditional gating mechanism whose purpose is to dynamically extract from the passage the information the question needs. Let $w_i$ be a passage token with word embedding $\vec{f}_{w_i}$, encoded by a BiLSTM into $\vec{h}^{d}_{w_i}$. If the token $w_i$ is a linked entity (i.e., aligned with an entity in the question's subgraph), its knowledge representation $\vec{e}_{w_i}$ from the SubGraph Reader is also available, and the two are fused as:

$$\vec{i}_{w_i} = \gamma^{w_i}\, \vec{e}_{w_i} + (1-\gamma^{w_i})\, \vec{f}_{w_i}, \qquad \gamma^{w_i} = \mathrm{sigmoid}\big(g(\vec{q}\,', \vec{e}_{w_i}, \vec{f}_{w_i})\big)$$

where $\gamma^{w_i}$ is a gating unit. This definition selectively merges KB entities with the entities in the text, and the gate is driven by the question's similarity to the two representations.
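One plausible instantiation of this conditional gate; the exact form of $g$ is not recoverable here, so the similarity-difference gate below is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_token(q_prime, e_tok, f_tok):
    """Conditional gating of a passage token (sketch).

    Mixes the token's entity knowledge e_tok with its word embedding f_tok;
    the gate here compares the question's similarity to each, one
    hypothetical choice for g(q', e, f)."""
    gamma = sigmoid(q_prime @ e_tok - q_prime @ f_tok)
    return gamma * e_tok + (1.0 - gamma) * f_tok
```

When the question matches the entity knowledge more than the raw word embedding, the gate leans toward the entity side.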

(3) A second BiLSTM takes $[\vec{i}_{w_i} ; \vec{h}^{d}_{w_i}]$ as input and outputs $\vec{h}\,'_{w_i}$. An attention $\lambda_i \propto \exp\big(\vec{q}\,'^{\top} \vec{h}\,'_{w_i}\big)$ measures how well each passage token matches the knowledge-fused question. A weighted sum over tokens then gives the passage vector $\vec{d} = \sum_i \lambda_i \vec{h}\,'_{w_i}$. If a topic entity is linked to several documents, their vectors are averaged:

$$\vec{e}^{\,d} = \frac{1}{|D_e|} \sum_{d \in D_e} \vec{d}$$

where $D_e$ denotes the set of retrieved documents associated with entity $e$.
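Step (3) amounts to a query-conditioned attention pool per document plus an average over the entity's documents, sketched here:

```python
import numpy as np

def softmax(x):
    x = np.asarray(x, float)
    e = np.exp(x - x.max())
    return e / e.sum()

def document_vector(q_prime, H):
    """Attention-pool one passage: H is (Ld, d) token states."""
    lam = softmax(H @ q_prime)
    return lam @ H

def entity_text_repr(q_prime, docs):
    """Average the vectors of all documents linked to one entity."""
    return np.mean([document_vector(q_prime, H) for H in docs], axis=0)
```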

  Finally, the KB representation $\vec{e}\,'$ and the text representation $\vec{e}^{\,d}$ of each candidate entity are concatenated and matched against the question to score the candidate as an answer:

$$s^{e} = \mathrm{sigmoid}\big(\vec{q}\,'^{\top} W_s [\vec{e}\,' ; \vec{e}^{\,d}]\big)$$
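The final scoring step can be sketched as a sigmoid-squashed bilinear match; `W_s` is a hypothetical trainable matrix:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def answer_score(q_prime, e_kb, e_text, W_s):
    """Score one candidate entity (sketch): match the reformulated question
    against the concatenated KB and text representations."""
    return float(sigmoid(q_prime @ (W_s @ np.concatenate([e_kb, e_text]))))
```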

5. Experiments

  The authors run several experiments; the main results are shown below:

[Table: main results under different KB-completeness settings]

Here N% KB denotes the incomplete-KB setting (following "Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text"). Hits@N measures whether a true answer appears among the top N candidates ranked by descending score; under Hits@1, a prediction counts as correct only when the single highest-scoring candidate matches the gold answer.
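The Hits@N metric described above reduces to a one-liner; candidates are assumed to be already sorted by descending model score:

```python
def hits_at_n(ranked_candidates, gold_answers, n=1):
    """Hits@N: 1 if any of the top-N ranked candidates is a gold answer."""
    return int(any(c in gold_answers for c in ranked_candidates[:n]))
```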

An ablation study verifies the role of each model component:

[Table: ablation results]

  The case study presents some examples:

[Figure: case study examples]


These examples show that the method can effectively obtain knowledge from text when the KB contains no answer. Some problems remain: the model may predict the correct answer type yet output the wrong answer entity; the returned entity may violate the question's constraints (e.g., for questions involving logical conditions); and noise (incorrect answer entities in the KB) can also hurt performance. These are directions for future improvement.


From: https://blog.51cto.com/u_15919249/5959872
