
Paper Notes: Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification

  In deep-learning-based knowledge graph construction, entity relation extraction is a crucial part of the knowledge extraction stage. This post walks through the paper "Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification", published at ACL 2016 by Peng Zhou et al., then examines the source code in detail and shows an example run.

1. Abstract (Translation)

  Relation classification (relation extraction) is an important semantic processing task in natural language processing. State-of-the-art systems still rely on lexical resources such as WordNet, or on NLP tools such as dependency parsing and named entity recognition (NER), to obtain high-level features. A further problem is that the information most important for characterizing the whole sentence can appear at any position in it. To address these issues, we propose an attention-based bidirectional long short-term memory network (Att-BiLSTM) that captures the most important information in a sentence. Experimental results on the SemEval-2010 relation classification dataset show that our model outperforms most existing methods while using only word vectors.

2. Quick Facts

| No. | Item | Value |
| --- | --- | --- |
| 1 | Model name | Att-BiLSTM |
| 2 | Field | Natural language processing |
| 3 | Research topic | Entity relation classification |
| 4 | Core techniques | Word embeddings / neural networks / attention |
| 5 | GitHub source | Attention-Based-BiLSTM-relation-extraction |
| 6 | Paper PDF | https://aclanthology.org/P16-2034.pdf |

3. Model Details

  The model consists of five components: an input layer, a word-embedding layer, a BiLSTM layer, an attention layer, and an output layer. The architecture is shown in the figure:

(Figure: the Att-BiLSTM model architecture)

3.1 Input Layer

  The input layer takes whole sentences as samples.
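
For concreteness, here is a minimal sketch of what a raw SemEval-2010 Task 8 style sample looks like and how it might be tokenized; the sentence, label, and `tokenize` helper are illustrative, not taken from the paper or its code:

```python
# One SemEval-2010 Task 8 style sample: a sentence with the two target
# entities marked, plus a directed relation label (illustrative example).
raw_sentence = "The <e1>author</e1> of the <e2>book</e2> won an award."
raw_label = "Product-Producer(e2,e1)"

def tokenize(sentence):
    """Strip the entity tags and split on whitespace (a deliberately
    simple stand-in for real preprocessing)."""
    for tag in ("<e1>", "</e1>", "<e2>", "</e2>"):
        sentence = sentence.replace(tag, " ")
    return sentence.split()

print(tokenize(raw_sentence))
# ['The', 'author', 'of', 'the', 'book', 'won', 'an', 'award.']
```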

3.2 Word Embeddings

  Given a sentence of $T$ tokens:

$$S = \{x_1, x_2, \dots, x_T\}$$

each token $x_i$ (a word or character) is represented by a vector $e_i$. Let $V$ be the vocabulary, let the hyperparameter $d^w$ be the word-embedding dimension, and let $v^i$ be the one-hot encoding of $x_i$ (1 at the current word, 0 elsewhere). The trainable embedding matrix is then $W^{wrd} \in \mathbb{R}^{d^w \times |V|}$ with $e_i = W^{wrd} v^i$, so the embedded form of the sentence $S$ is $emb_s = \{e_1, e_2, \dots, e_T\}$.
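
A minimal PyTorch sketch of this lookup step (the vocabulary size and variable names are our own assumptions; `nn.Embedding` stores one row per vocabulary entry, which is equivalent to multiplying the one-hot vector $v^i$ by the embedding matrix):

```python
import torch
import torch.nn as nn

vocab_size, emb_dim = 10_000, 300              # |V| (assumed) and d^w (300 in the paper's setup)
embedding = nn.Embedding(vocab_size, emb_dim)  # trainable W^wrd

token_ids = torch.tensor([[12, 845, 3, 99]])   # one sentence of T=4 tokens, as vocab indices
emb_s = embedding(token_ids)                   # shape (1, 4, 300): {e_1, ..., e_T}
print(emb_s.shape)
```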

3.3 Bi-LSTM Layer

  A bidirectional LSTM is an improvement over the plain RNN: it runs a forward pass and a backward pass over the sequence, and at each time step an LSTM unit selectively memorizes, forgets, and emits information. The LSTM unit is computed as follows:
$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)$$

$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f)$$

$$g_t = \tanh(W_{xc} x_t + W_{hc} h_{t-1} + W_{cc} c_{t-1} + b_c)$$

$$c_t = i_t g_t + f_t c_{t-1}$$

$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o)$$

$$h_t = o_t \tanh(c_t)$$

  At each time step the model produces a forward output $\overrightarrow{h_t}$ and a backward output $\overleftarrow{h_t}$; following the paper, the two are combined by element-wise summation: $h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t}$.
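
A minimal PyTorch sketch of this layer (note that `nn.LSTM` omits the peephole terms in the equations above and returns the two directions concatenated, so we split and sum by hand to match the element-wise combination; the hidden size of 100 is an assumption):

```python
import torch
import torch.nn as nn

emb_dim, hidden = 300, 100
bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

emb_s = torch.randn(1, 4, emb_dim)   # (batch, T, d^w) from the embedding layer
out, _ = bilstm(emb_s)               # (1, 4, 2*hidden): [forward ; backward]
fwd, bwd = out[..., :hidden], out[..., hidden:]
H = fwd + bwd                        # element-wise sum of the two directions
print(H.shape)                       # (1, 4, 100)
```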

3.4 Attention Layer

  The LSTM outputs at different time steps would otherwise all carry the same "influence" on the result, but in relation classification some outputs matter more for the decision than others, so a weighting scheme is introduced; attention is, at its core, a weighted sum.
  Let $H = [h_1, h_2, \dots, h_T]$ be the matrix of outputs from Section 3.3, and let $w$ be a trainable parameter vector. The attention layer computes:
$$M = \tanh(H)$$

$$\alpha = \mathrm{softmax}(w^{\top} M)$$

$$r = H \alpha^{\top}$$

  Here $\alpha$ is the vector of attention weights and $r$ is the weighted sum of the LSTM outputs $H$; the final sentence representation is obtained through a non-linearity, $h^{*} = \tanh(r)$.
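
The same computation in a few lines of PyTorch (shapes and names are ours; this is a sketch of the formulas above, not the repository's code):

```python
import torch

hidden, T = 100, 4
H = torch.randn(1, T, hidden)                # BiLSTM outputs [h_1, ..., h_T]
w = torch.randn(hidden, requires_grad=True)  # trainable attention vector

M = torch.tanh(H)                            # M = tanh(H)
alpha = torch.softmax(M @ w, dim=1)          # attention weights over the T steps
r = (alpha.unsqueeze(-1) * H).sum(dim=1)     # r = H alpha^T, shape (1, hidden)
h_star = torch.tanh(r)                       # sentence representation h* = tanh(r)
print(h_star.shape)                          # (1, 100)
```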

3.5 Loss Function

  The representation $h^{*}$ is mapped to the label space by a fully connected layer, and the predicted class is obtained via a softmax. The loss is the negative log-likelihood with an L2 penalty:
$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} t_i \log(y_i) + \lambda \lVert \theta \rVert_F^2$$

where $t$ is the one-hot gold label, $y$ the predicted probability over the $m$ classes, and $\lambda$ the regularization coefficient.
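
A sketch of the classifier head and loss in PyTorch (`CrossEntropyLoss` fuses the softmax with the negative log-likelihood, and the L2 term is realized here through the optimizer's `weight_decay`, whose value is an illustrative assumption):

```python
import torch
import torch.nn as nn

hidden, num_classes = 100, 19
classifier = nn.Linear(hidden, num_classes)   # fully connected layer on top of h*

h_star = torch.randn(8, hidden)               # a batch of 8 sentence representations
logits = classifier(h_star)
labels = torch.randint(num_classes, (8,))

loss = nn.CrossEntropyLoss()(logits, labels)  # softmax + negative log-likelihood
optim = torch.optim.Adadelta(classifier.parameters(), lr=1.0,
                             weight_decay=1e-5)  # L2 penalty; 1e-5 is illustrative
loss.backward()
optim.step()
```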

4. Experiments and Analysis

  The experiments use the SemEval-2010 Task 8 dataset: 8,000 training sentences and 2,717 test sentences, covering 9 relation types plus an Other class. Since each of the 9 relations is directional, the task can also be treated as 19-way classification (9 × 2 + 1). The experimental settings are listed in the table:

| No. | Setting | Value |
| --- | --- | --- |
| 1 | Dataset | SemEval-2010 Task 8 |
| 2 | Optimizer | AdaDelta |
| 3 | Learning rate | 1.0 |
| 4 | batch_size | 10 |
| 5 | Regularization coefficient | |
| 6 | dropout | 0.5 |
| 7 | Learning-rate decay | 0.9 |
| 8 | Word-embedding dimension | 300 |
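
The same settings gathered into a plain config dict for reference when reproducing a run (the regularization coefficient is left out because its value is missing from the table above):

```python
# Training settings from the table above.
config = {
    "dataset": "SemEval-2010 Task 8",
    "optimizer": "AdaDelta",
    "learning_rate": 1.0,
    "batch_size": 10,
    "dropout": 0.5,
    "lr_decay": 0.9,
    "emb_dim": 300,
}
```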

The experimental results are shown in the figure:

(Figure: experimental results on SemEval-2010 Task 8)

5. Summary and Evaluation

  The paper is simply structured and the model is not especially complex. Though aimed at entity relation classification, the architecture also applies to text classification, sentiment classification, and similar tasks. Its core idea is combining attention with an LSTM, and it leaves considerable room for improvement.

