Contents
- The cross-entropy loss function `torch.nn.CrossEntropyLoss`
- F.cross_entropy
- F.nll_loss
The cross-entropy loss function `torch.nn.CrossEntropyLoss`
Its main arguments (from the PyTorch documentation):

- weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size C (the per-class weight used when computing the loss).
- size_average (bool, optional): Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True`
- ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When `size_average` is `True`, the loss is averaged over non-ignored targets.
- reduce (bool, optional): Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
- reduction (string, optional): Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`.
  - `'none'`: no reduction will be applied
  - `'mean'`: the sum of the output will be divided by the number of elements in the output
  - `'sum'`: the output will be summed
  - Note: `size_average` and `reduce` are in the process of being deprecated; in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`
In short, the three parameters that matter are `weight`, `ignore_index`, and `reduction`:

- `weight`: adjusts the weight of each class
- `ignore_index`: a target index that is excluded from the loss, e.g. the padding index
- `reduction`: controls how the loss is reduced (`none`, `mean`, `sum`)
```python
import torch
import torch.nn.functional as F

input = torch.randn(3, 5)
label = torch.empty(3, dtype=torch.long).random_(5)  # e.g. tensor([1, 3, 0])

res = F.cross_entropy(input, label)
# tensor(1.8942)
res_mean = F.cross_entropy(input, label, reduction='mean')  # same as the default
# tensor(1.8942)
res_sum = F.cross_entropy(input, label, reduction='sum')
# tensor(5.6826)
res_none = F.cross_entropy(input, label, reduction='none')
# tensor([1.3254, 2.9982, 1.3590])
res_ignore0 = F.cross_entropy(input, label, reduction='none', ignore_index=0)
# tensor([1.3254, 2.9982, 0.0000])
```
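The run above covers `reduction` and `ignore_index`. A minimal sketch for `weight`, with hypothetical class weights: with `reduction='none'`, each sample's loss is simply scaled by `weight[target]`.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
input = torch.randn(3, 5)
label = torch.tensor([1, 3, 0])

# hypothetical per-class weights: class 0 counts twice as much as the rest
w = torch.tensor([2.0, 1.0, 1.0, 1.0, 1.0])

res_weighted = F.cross_entropy(input, label, weight=w, reduction='none')
res_plain = F.cross_entropy(input, label, reduction='none')
print(res_weighted / res_plain)  # tensor([1., 1., 2.]) -- only the target-0 sample is scaled
```

Note that with `reduction='mean'`, the weighted losses are divided by the sum of `weight[target]` over the batch, not by the batch size.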
F.cross_entropy
`torch.nn.CrossEntropyLoss` calls the function `F.cross_entropy`. Unlike its TensorFlow counterpart, `F.cross_entropy` performs two steps internally: `log_softmax` followed by `F.nll_loss`.
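A quick sketch to verify this decomposition (the tensor values are arbitrary):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
input = torch.randn(3, 5)
label = torch.tensor([1, 3, 0])

# fused call vs. the explicit two-step decomposition
fused = F.cross_entropy(input, label)
two_step = F.nll_loss(F.log_softmax(input, dim=1), label)
print(torch.allclose(fused, two_step))  # True
```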
`log_softmax` mainly addresses overflow and underflow, speeds up computation, and improves numerical stability.

softmax exponentiates its inputs: when an input is large, the exponential overflows; when an input is negative with a large absolute value, both the numerator and the denominator become very small and may round down to zero (underflow).

Mathematically, log_softmax is simply the logarithm of softmax, but in practice it is computed as:

$$\log\,\mathrm{softmax}(x_i) = \log\frac{e^{x_i}}{\sum_j e^{x_j}} = (x_i - M) - \log\sum_j e^{x_j - M}$$

where $M = \max_j x_j$ is the maximum over all the inputs.
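A small demonstration of why the shift by $M$ matters; the inputs below are deliberately large so the naive formula breaks:

```python
import torch

x = torch.tensor([1000.0, 1001.0, 1002.0])

# naive: exp(1000) overflows to inf, so log(softmax) becomes nan
naive = torch.log(torch.exp(x) / torch.exp(x).sum())
print(naive)  # tensor([nan, nan, nan])

# stable: subtract the maximum M before exponentiating
M = x.max()
stable = (x - M) - torch.log(torch.exp(x - M).sum())
print(stable)  # tensor([-2.4076, -1.4076, -0.4076])

# matches PyTorch's built-in implementation
print(torch.log_softmax(x, dim=0))  # tensor([-2.4076, -1.4076, -0.4076])
```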
F.nll_loss
`F.nll_loss` is the negative log likelihood loss (the log-likelihood cost function): given a row of log-probabilities and a target class index, it returns the negative log-probability assigned to the target class.
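In other words, `F.nll_loss` does no math beyond indexing and negation; a minimal check:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
log_probs = torch.log_softmax(torch.randn(3, 5), dim=1)
target = torch.tensor([1, 3, 0])

# nll_loss just picks out -log_prob at each row's target index
a = F.nll_loss(log_probs, target, reduction='none')
b = -log_probs[torch.arange(3), target]
print(torch.allclose(a, b))  # True
```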