
PyTorch Dynamic Graphs, Autograd, and grad_fn Explained


Autograd

requires_grad is contagious: a tensor computed from a tensor that requires gradients is itself pulled into the computation graph.

requires_grad is contagious. It means that when a Tensor is created by operating on other Tensors, the requires_grad of the resulting Tensor will be set to True if at least one of the tensors used in the operation has its requires_grad set to True.

If requires_grad is set to False, grad_fn would be None.
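A minimal example illustrating both behaviours (the tensor names here are arbitrary):

import torch

a = torch.randn(3, requires_grad=True)   # leaf tensor that tracks gradients
b = torch.randn(3)                        # requires_grad defaults to False

c = a * b                  # at least one input requires grad,
print(c.requires_grad)     # so the result does too: True
print(c.grad_fn)           # <MulBackward0 object at ...>

d = b * 2                  # no input requires grad
print(d.requires_grad)     # False
print(d.grad_fn)           # None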

Local gradients

PyTorch stores the input tensors of each operation so that it can use them to compute gradients during the backward pass.

Pseudocode for how each grad_fn's backward call propagates gradients using these local gradients:

# Pseudocode for a grad_fn node's backward: self.Tensor is the node's output,
# local_grad computes the local gradient of that output w.r.t. one input.
def backward(incoming_gradients):
    self.Tensor.grad = incoming_gradients
    for inp in self.inputs:
        if inp.grad_fn is not None:
            # chain rule: incoming gradient times the local gradient
            new_incoming_gradients = \
                incoming_gradients * local_grad(self.Tensor, inp)
            inp.grad_fn.backward(new_incoming_gradients)
        else:
            # leaf tensor: no grad_fn, nothing further to propagate
            pass
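As a concrete example, for z = x * y the multiplication node saves x and y, so the local gradients are dz/dx = y and dz/dy = x:

import torch

x = torch.tensor(2.0, requires_grad=True)
y = torch.tensor(3.0, requires_grad=True)
z = x * y            # the Mul node saves x and y for the backward pass

z.backward()         # incoming gradient defaults to 1.0 for a scalar output
print(x.grad)        # dz/dx = y -> tensor(3.)
print(y.grad)        # dz/dy = x -> tensor(2.)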

Calling backward() really just passes an external (incoming) gradient into the graph.
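For a non-scalar output you must supply that external gradient yourself; for a scalar loss, loss.backward() is shorthand for loss.backward(torch.tensor(1.0)). A small sketch:

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2

external_grad = torch.ones_like(y)   # the incoming gradient passed from outside
y.backward(external_grad)
print(x.grad)                        # dy/dx = 2x -> tensor([2., 4., 6.])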

By default, backward() cannot be called a second time on the same graph.

This is because the non-leaf buffers are destroyed the first time backward() is called, so there is no path to navigate to the leaves when backward() is invoked a second time.

If you set retain_graph=True, the gradients from the two backward passes will be added together.

If you do the above, you will be able to backpropagate again through the same graph, and the gradients will be accumulated, i.e. the next time you backpropagate, the gradients will be added to those already stored from the previous backward pass.
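A short sketch of both points: the second backward() only works because the first one was called with retain_graph=True, and the gradients accumulate:

import torch

x = torch.tensor(2.0, requires_grad=True)
loss = x ** 2

loss.backward(retain_graph=True)   # keep the non-leaf buffers alive
print(x.grad)                      # tensor(4.)

loss.backward()                    # second pass through the same graph
print(x.grad)                      # accumulated: 4 + 4 -> tensor(8.)

# Without retain_graph=True on the first call, the second backward() would raise
# RuntimeError: Trying to backward through the graph a second time ...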


