
Learning Notes on Neural Networks in PyTorch

Date: 2022-09-30 15:26:38

(Notes on points of confusion; some content is omitted.)

Building a neural network: the Module class provided in the nn package is a model-construction class and the base class of all neural network modules; we can subclass it to define the models we want.

A multilayer perceptron (the MLP class) overrides Module's __init__ method (to create the model parameters) and its forward method (to define the forward computation, i.e. forward propagation).

There is no need to define a backward function: autograd automatically generates the backward function needed for back-propagation.

Note that Module is not named something like Layer or Model, because it is a freely composable building block: a subclass can be a single layer, an entire model, or anything in between.

import torch
from torch import nn

class MLP(nn.Module):
  # Declare layers that hold model parameters; here we declare two fully connected layers
  def __init__(self, **kwargs):
    # Call the parent class constructor to do the necessary initialization;
    # extra keyword arguments can still be passed when constructing an instance
    super(MLP, self).__init__(**kwargs)
    self.hidden = nn.Linear(784, 256)  # first (hidden) layer
    self.act = nn.ReLU()
    self.output = nn.Linear(256, 10)  # second (output) layer

  # Define the forward computation: how to compute the model output from the input x
  def forward(self, x):
    o = self.act(self.hidden(x))
    return self.output(o)
net = MLP()
print(net)

Common layers in a neural network: layers without model parameters, layers with model parameters, 2-D convolutional layers, and pooling layers.
(Layer without model parameters) The MyLayer class below subclasses Module to define a custom layer that subtracts the mean from its input, with the computation defined in the forward function.

import torch
from torch import nn

class MyLayer(nn.Module):
    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)
    def forward(self, x):
        return x - x.mean() 
layer = MyLayer()  # instantiate
layer(torch.tensor([1, 2, 3, 4, 5], dtype=torch.float))  # forward computation

(Layers with model parameters) Use ParameterList and ParameterDict to define a list and a dictionary of parameters, respectively.

class MyListDense(nn.Module):
    def __init__(self):
        super(MyListDense, self).__init__()
        self.params = nn.ParameterList([nn.Parameter(torch.randn(4, 4)) for i in range(3)])
        self.params.append(nn.Parameter(torch.randn(4, 1)))

    def forward(self, x):
        for i in range(len(self.params)):
            x = torch.mm(x, self.params[i])
        return x
net = MyListDense()
print(net)
class MyDictDense(nn.Module):
    def __init__(self):
        super(MyDictDense, self).__init__()
        self.params = nn.ParameterDict({
                'linear1': nn.Parameter(torch.randn(4, 4)),
                'linear2': nn.Parameter(torch.randn(4, 1))
        })
        self.params.update({'linear3': nn.Parameter(torch.randn(4, 2))}) # add a new entry

    def forward(self, x, choice='linear1'):
        return torch.mm(x, self.params[choice])

net = MyDictDense()
print(net)

(2-D convolutional layer) Computes the cross-correlation between the input and the kernel, then adds a scalar bias to produce the output.

Padding means adding elements (usually zeros) on both sides of the input's height and width.
When the kernel's height and width differ, different amounts of padding along each dimension can still make the output the same height and width as the input.
The convolution window starts at the top-left of the input array and slides over it from left to right and from top to bottom; the number of rows and columns moved per step is called the stride (stride).
Padding increases the output's height and width and is commonly used to make the output the same size as the input; a larger stride reduces the output's height and width.

import torch
from torch import nn

# Convolution operation (2-D cross-correlation)
def corr2d(X, K): 
    h, w = K.shape
    X, K = X.float(), K.float()
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i: i + h, j: j + w] * K).sum()
    return Y

# 2-D convolutional layer
class Conv2D(nn.Module):
    def __init__(self, kernel_size):
        super(Conv2D, self).__init__()
        self.weight = nn.Parameter(torch.randn(kernel_size))
        self.bias = nn.Parameter(torch.randn(1))

    def forward(self, x):
        return corr2d(x, self.weight) + self.bias
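
The padding and stride described above are not exercised by the from-scratch code, so here is a minimal sketch (my addition, using the built-in nn.Conv2d) of how they change the output shape:

import torch
from torch import nn

X = torch.rand(1, 1, 8, 8)  # (batch, channels, height, width)
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # padding of 1 keeps H and W unchanged
print(conv(X).shape)   # torch.Size([1, 1, 8, 8])
conv_s = nn.Conv2d(1, 1, kernel_size=3, padding=1, stride=2)  # a stride of 2 halves H and W
print(conv_s(X).shape)  # torch.Size([1, 1, 4, 4])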

(Pooling layer) Computes an output for each fixed-shape window (the pooling window) of the input.
Unlike a convolutional layer, which computes the cross-correlation between the input and a kernel, a pooling layer directly takes the maximum (max pooling) or the average (average pooling) of the elements inside the pooling window.

import torch
from torch import nn

def pool2d(X, pool_size, mode='max'):
    p_h, p_w = pool_size
    Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            if mode == 'max':
                Y[i, j] = X[i: i + p_h, j: j + p_w].max()
            elif mode == 'avg':
                Y[i, j] = X[i: i + p_h, j: j + p_w].mean()
    return Y
X = torch.tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]], dtype=torch.float)
pool2d(X, (2, 2), 'avg')
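
For comparison (my addition, a hedged sketch), the built-in pooling layers compute the same thing; note that nn.AvgPool2d / nn.MaxPool2d default the stride to the kernel size, so stride=1 is passed explicitly to match the sliding-window behaviour of pool2d above:

pool = nn.AvgPool2d(kernel_size=2, stride=1)  # matches pool2d(X, (2, 2), 'avg')
pool(X.unsqueeze(0).unsqueeze(0))             # add batch/channel dims: (1, 1, 3, 3) -> (1, 1, 2, 2)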

LeNet: a model's learnable parameters are returned by net.parameters(); the gradient buffers of all parameters can be cleared with net.zero_grad(),
after which a backward pass with a random gradient can be run via out.backward(torch.randn(1, 10)).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # input image channels: 1; output channels: 6; 5x5 convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # 2x2 Max pooling
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # if the pooling window is square, a single number is enough
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
print(net)
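
To exercise the calls mentioned above (a sketch; the 1x32x32 input is what this LeNet-style architecture expects):

params = list(net.parameters())
print(len(params), params[0].size())   # number of parameter tensors, and conv1's weight shape

input = torch.randn(1, 1, 32, 32)      # one 1-channel 32x32 image
out = net(input)
print(out)

net.zero_grad()                        # zero the gradient buffers of all parameters
out.backward(torch.randn(1, 10))       # back-propagate with a random gradient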

Initialization:

conv = nn.Conv2d(1, 3, 3)    # example layers (not defined in the original notes, added so the snippet runs)
linear = nn.Linear(10, 1)

# Kaiming initialization for the conv layer
torch.nn.init.kaiming_normal_(conv.weight.data)
conv.weight.data
# constant initialization for the linear layer
torch.nn.init.constant_(linear.weight.data, 0.3)
linear.weight.data

def initialize_weights(model):
	for m in model.modules():
		# check whether the module is a Conv2d
		if isinstance(m, nn.Conv2d):
			torch.nn.init.xavier_normal_(m.weight.data)
			# check whether it has a bias
			if m.bias is not None:
				torch.nn.init.constant_(m.bias.data, 0.3)
		elif isinstance(m, nn.Linear):
			torch.nn.init.normal_(m.weight.data, 0.1)
			if m.bias is not None:
				torch.nn.init.zeros_(m.bias.data)
		elif isinstance(m, nn.BatchNorm2d):
			m.weight.data.fill_(1)
			m.bias.data.zero_()  # zero_() is the in-place method; Tensor has no zeros_()
# model definition
class MLP(nn.Module):
  # Declare layers that hold model parameters (here a conv layer and a linear layer)
  def __init__(self, **kwargs):
    # Call the parent class constructor to do the necessary initialization
    super(MLP, self).__init__(**kwargs)
    self.hidden = nn.Conv2d(1,1,3)
    self.act = nn.ReLU()
    self.output = nn.Linear(10,1)
    
  # Define the forward computation: how to compute the model output from the input x
  def forward(self, x):
    o = self.act(self.hidden(x))
    return self.output(o)

mlp = MLP()
print(list(mlp.parameters()))
print("-------初始化-------")

initialize_weights(mlp)
print(list(mlp.parameters()))

Loss functions:

Binary cross-entropy: computes the cross entropy for a binary classification task, where the labels are {0, 1}. The input to this loss must already be a probability distribution; typically it is the output of a sigmoid (or softmax) activation layer.

torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')

Cross-entropy loss:

torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')

L1 loss: the absolute difference between the output y and the ground-truth target.

torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')

MSE loss: the squared difference between the output y and the ground-truth target.

torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')

Smooth L1 loss: a smoothed version of L1 that reduces the influence of outliers.

torch.nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean', beta=1.0)

Negative log-likelihood loss for a Poisson-distributed target.

torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')

KL-divergence loss (relative entropy): a distance measure between distributions; often useful when regressing over a continuous output-space distribution that is sampled discretely.

torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False)

Margin ranking loss: compares the relative ordering of two inputs; used for ranking tasks.

torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')

Multi-label margin loss for multi-label classification.

torch.nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')

Two-class logistic (soft-margin) loss.

torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean')

Multi-class hinge (margin) loss.

torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean')

Triplet margin loss.

torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')

Hinge embedding loss on embedding outputs.

torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')

Cosine embedding loss, based on the cosine similarity between two vectors.

torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')

CTC loss: used for sequence (time-series) classification where the input/output alignment is unknown.

torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)
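
As a usage sketch (my addition): these criteria all follow the same call pattern, taking the model output and the target. With CrossEntropyLoss, for example, the input is raw logits of shape (N, C) and the target holds class indices:

loss_fn = torch.nn.CrossEntropyLoss()
logits = torch.randn(3, 5)            # batch of 3 samples, 5 classes (raw scores)
target = torch.tensor([1, 0, 4])      # ground-truth class indices
loss = loss_fn(logits, target)
print(loss.item())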

Training and evaluation:

def train(epoch):
    model.train()  # training mode
    train_loss = 0
    for data, label in train_loader:  # iterate over all batches in the DataLoader
        data, label = data.cuda(), label.cuda()  # move the batch to the GPU for the computations below
        optimizer.zero_grad()  # zero the optimizer's gradients before training on the current batch
        output = model(data)  # feed the data through the model
        loss = criterion(output, label)  # compute the loss with the predefined criterion
        loss.backward()  # back-propagate the loss through the network
        optimizer.step()  # update the model parameters with the optimizer
        train_loss += loss.item() * data.size(0)
    train_loss = train_loss / len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))

def val(epoch):
    model.eval()  # evaluation/test mode
    val_loss = 0
    running_accu = 0
    with torch.no_grad():
        for data, label in val_loader:
            data, label = data.cuda(), label.cuda()
            output = model(data)
            preds = torch.argmax(output, 1)
            loss = criterion(output, label)
            val_loss += loss.item() * data.size(0)
            running_accu += torch.sum(preds == label.data)
    val_loss = val_loss / len(val_loader.dataset)
    print('Epoch: {} \tValidation Loss: {:.6f}'.format(epoch, val_loss))
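
These functions assume that model, criterion, optimizer, train_loader and val_loader already exist; a hedged sketch of how the pieces are typically wired together (all names here are illustrative):

model = Net().cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(1, 11):
    train(epoch)
    val(epoch)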

Module definition (example: U-Net building blocks):

class Up(nn.Module):
    """Upscaling then double conv"""

    def __init__(self, in_channels, out_channels, bilinear=False):
        super().__init__()

        # if bilinear, use the normal convolutions to reduce the number of channels
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        # input is CHW
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]

        x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2])
        # if you have padding issues, see
        # https://github.com/HaiyongJiang/U-Net-Pytorch-Unstructured-Buggy/commit/0e854509c2cea854e247a9c615f175f76fbb2e3a
        # https://github.com/xiaopeng-liao/Pytorch-UNet/commit/8ebac70e633bac59fc22bb5195e513d5832fb3bd
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)
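
The Up block above uses a DoubleConv module that these notes omit; a common definition (my addition, a sketch in the spirit of the referenced Pytorch-UNet implementation, not verified against it) is:

class DoubleConv(nn.Module):
    """(convolution => BatchNorm => ReLU) * 2"""

    def __init__(self, in_channels, out_channels, mid_channels=None):
        super().__init__()
        if not mid_channels:
            mid_channels = out_channels
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.double_conv(x)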

 

class UNet(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=False):
        super(UNet, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear
     ···
        factor = 2 if bilinear else 1
        ···
        self.up1 = Up(1024, 512 // factor, bilinear)
        ···

    def forward(self, x):
        ···
        x4 = ···
        x5 = ···
        x = self.up1(x5, x4)
        ···
        return ···

Modifying existing (open-source) models:

(Modifying model layers)

import torchvision.models as models
net = models.resnet50()
print(net)  # inspect the network structure
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([('fc1', nn.Linear(2048, 128)),
                          ('relu1', nn.ReLU()), 
                          ('dropout1',nn.Dropout(0.5)),
                          ('fc2', nn.Linear(128, 10)),
                          ('output', nn.Softmax(dim=1))
                          ]))
    
net.fc = classifier  # replace the model's fc layer: add an extra fully connected layer and change the output to 10 units

(Adding an external input) Add an extra input variable add_variable at the second-to-last layer to assist the prediction.

class Model(nn.Module):
    def __init__(self, net):
        super(Model, self).__init__()
        self.net = net
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        self.fc_add = nn.Linear(1001, 10, bias=True)
        self.output = nn.Softmax(dim=1)
     
    # Modify forward (together with the layers defined above): the 1000-dim tensor first goes through
    # the ReLU and dropout layers, is then concatenated with the external variable "add_variable",
    # and is finally mapped to the desired output dimension by a fully connected layer
    def forward(self, x, add_variable):
        x = self.net(x)
        x = torch.cat((self.dropout(self.relu(x)), add_variable.unsqueeze(1)), 1)  # concatenation via torch.cat
        x = self.fc_add(x)
        x = self.output(x)
        return x
import torchvision.models as models
net = models.resnet50()
model = Model(net).cuda()  # instantiate the modified model
outputs = model(inputs, add_var)  # the model now takes two inputs

(Adding an extra output) Besides the model's final output, we sometimes also need the result of an intermediate layer, e.g. to apply extra supervision and obtain better intermediate features; to do this, modify the return value of the forward function in the model definition.

class Model(nn.Module):
    def __init__(self, net):
        super(Model, self).__init__()
        self.net = net
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(1000, 10, bias=True)
        self.output = nn.Softmax(dim=1)
        
    def forward(self, x, add_variable):
        x1000 = self.net(x)
        x10 = self.dropout(self.relu(x1000))
        x10 = self.fc1(x10)
        x10 = self.output(x10)
        return x10, x1000
import torchvision.models as models
net = models.resnet50()
model = Model(net).cuda()  # instantiate the modified model
out10, out1000 = model(inputs, add_var)  # the model now returns two outputs

Saving and loading models: (the pt, pth and pkl formats all support storing either the model weights or the whole model)

from torchvision import models
model = models.resnet152(pretrained=True)

# save the whole model
torch.save(model, save_dir)
# save only the model weights
torch.save(model.state_dict(), save_dir)

(Save on a single GPU + load on a single GPU)

import os
import torch
from torchvision import models

os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # replace with the GPU id you want to use
model = models.resnet152(pretrained=True)
model.cuda()

# save + load the whole model
torch.save(model, save_dir)
loaded_model = torch.load(save_dir)
loaded_model.cuda()

# save + load the model weights
torch.save(model.state_dict(), save_dir)
loaded_dict = torch.load(save_dir)
loaded_model = models.resnet152()   # note: the model architecture must be defined here
loaded_model.load_state_dict(loaded_dict)   # load the weights (assigning to .state_dict would not copy them into the model)
loaded_model.cuda()

(Save on a single GPU + load on multiple GPUs)

import os
import torch
from torchvision import models

os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # replace with the GPU id you want to use
model = models.resnet152(pretrained=True)
model.cuda()

# save + load the whole model
torch.save(model, save_dir)

os.environ['CUDA_VISIBLE_DEVICES'] = '1,2'   # replace with the GPU ids you want to use
loaded_model = torch.load(save_dir)
loaded_model = nn.DataParallel(loaded_model).cuda()

# save + load the model weights
torch.save(model.state_dict(), save_dir)

os.environ['CUDA_VISIBLE_DEVICES'] = '1,2'   # replace with the GPU ids you want to use
loaded_dict = torch.load(save_dir)
loaded_model = models.resnet152()   # note: the model architecture must be defined here
loaded_model.load_state_dict(loaded_dict)   # load the weights before wrapping in DataParallel
loaded_model = nn.DataParallel(loaded_model).cuda()

(Save on multiple GPUs + load on a single GPU)

import os
import torch
from torchvision import models

os.environ['CUDA_VISIBLE_DEVICES'] = '1,2'   # replace with the GPU ids you want to use

model = models.resnet152(pretrained=True)
model = nn.DataParallel(model).cuda()

# save + load the whole model
torch.save(model, save_dir)

os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # replace with the GPU id you want to use
loaded_model = torch.load(save_dir)
loaded_model = loaded_model.module   # strip the DataParallel wrapper
# save + load the model weights
torch.save(model.state_dict(), save_dir)

os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # replace with the GPU id you want to use
loaded_dict = torch.load(save_dir)
loaded_model = models.resnet152()   # note: the model architecture must be defined here
loaded_model.load_state_dict({k.replace('module.', ''): v for k, v in loaded_dict.items()})   # strip the 'module.' prefix added by DataParallel

(Save on multiple GPUs + load on multiple GPUs)

import os
import torch
from torchvision import models

os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2'   # replace with the GPU ids you want to use

model = models.resnet152(pretrained=True)
model = nn.DataParallel(model).cuda()
# saving + loading the whole model is analogous to the multi-GPU save case above
# saving + loading the model weights is strongly recommended!
torch.save(model.state_dict(), save_dir)
loaded_dict = torch.load(save_dir)
loaded_model = models.resnet152()   # note: the model architecture must be defined here
loaded_model = nn.DataParallel(loaded_model).cuda()
loaded_model.load_state_dict(loaded_dict)   # keys match: both carry the 'module.' prefix added by DataParallel

If only the whole saved model is available, a new model can still be built by extracting its weights:

# load the whole model
loaded_whole_model = torch.load(save_dir)
loaded_model = models.resnet152()   # note: the model architecture must be defined here
loaded_model = nn.DataParallel(loaded_model).cuda()
loaded_model.load_state_dict(loaded_whole_model.state_dict())   # extract the weights from the loaded model and load them

Note: to actually copy a weight dictionary into loaded_model, use load_state_dict as in the snippets above; simply assigning to the state_dict attribute does not load the weights:

loaded_model.load_state_dict(loaded_dict)

Defining a custom loss function as a class:

ALPHA = 0.8
GAMMA = 2

class FocalLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(FocalLoss, self).__init__()

    def forward(self, inputs, targets, alpha=ALPHA, gamma=GAMMA, smooth=1):
        inputs = torch.sigmoid(inputs)   # F.sigmoid is deprecated; torch.sigmoid does the same thing
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
        BCE_EXP = torch.exp(-BCE)
        focal_loss = alpha * (1-BCE_EXP)**gamma * BCE
                       
        return focal_loss
# usage
criterion = FocalLoss()
loss = criterion(inputs, targets)

Custom dynamic learning-rate adjustment:

Suppose the learning rate should decay to 1/10 of its value every 30 epochs.

def adjust_learning_rate(optimizer, epoch):
    lr = args.lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=0.9)
for epoch in range(10):
    train(...)
    validate(...)
    adjust_learning_rate(optimizer, epoch)
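
The same schedule is also available as a built-in scheduler (my addition, a sketch; train/validate are the placeholders from above):

from torch.optim.lr_scheduler import StepLR

optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=0.9)
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)   # multiply the lr by 0.1 every 30 epochs
for epoch in range(10):
    train(...)
    validate(...)
    scheduler.step()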

Training only specific layers:

def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False  # freeze these layers
import torchvision.models as models
# freeze the parameters' gradients
feature_extract = True
model = models.resnet18(pretrained=True)
set_parameter_requires_grad(model, feature_extract)
# modify the model
num_ftrs = model.fc.in_features
model.fc = nn.Linear(in_features=num_ftrs, out_features=4, bias=True)  # the newly created fc layer's parameters have requires_grad=True, so only they will be updated
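
To make the frozen layers explicit to the optimizer (my addition, a hedged sketch), pass it only the parameters that still require gradients:

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),  # only the new fc layer's parameters
    lr=1e-3, momentum=0.9)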

Tuning hyperparameters with argparse

(To keep the code concise and modular, hyperparameter handling is usually placed in config.py and then imported from train.py or other files.)

import argparse  
  
def get_options(parser=argparse.ArgumentParser()):  # a default parser object is created if none is passed in
    # add arguments
    parser.add_argument('--workers', type=int, default=0,  
                        help='number of data loading workers, you had better put it '  
                              '4 times of your gpu')  
   # (name, type, default value, help text)
    parser.add_argument('--batch_size', type=int, default=4, help='input batch size, default=4')
  
    parser.add_argument('--niter', type=int, default=10, help='number of epochs to train for, default=10')  
  
    parser.add_argument('--lr', type=float, default=3e-5, help='select the learning rate, default=3e-5')
  
    parser.add_argument('--seed', type=int, default=118, help="random seed")  
  
    parser.add_argument('--cuda', action='store_true', default=True, help='enables cuda')  
    parser.add_argument('--checkpoint_path',type=str,default='',  
                        help='Path to load a previous trained model if not empty (default empty)')  
    parser.add_argument('--output',action='store_true',default=True,help="shows output")  
   # parse the arguments
    opt = parser.parse_args()  
  
    if opt.output:  
        print(f'num_workers: {opt.workers}')  
        print(f'batch_size: {opt.batch_size}')  
        print(f'epochs (niters) : {opt.niter}')  
        print(f'learning rate : {opt.lr}')  
        print(f'manual_seed: {opt.seed}')  
        print(f'cuda enable: {opt.cuda}')  
        print(f'checkpoint_path: {opt.checkpoint_path}')  
  
    return opt  
  
if __name__ == '__main__':  
    opt = get_options()

Command line: python config.py --lr 3e-4 --batch_size 32

# import the required libraries
...
import config

opt = config.get_options()

manual_seed = opt.seed
num_workers = opt.workers
batch_size = opt.batch_size
lr = opt.lr
niters = opt.niter  # the argparse option is named --niter
checkpoint_path = opt.checkpoint_path

# set random seeds so that results are reproducible
def set_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

...


if __name__ == '__main__':
	set_seed(manual_seed)
	for epoch in range(niters):
		train(model,lr,batch_size,num_workers,checkpoint_path)
		val(model,lr,batch_size,num_workers,checkpoint_path)

Visualization with TensorBoard:

from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('./runs')  # a directory must be given first, where TensorBoard will store the logged data

Command line: tensorboard --logdir=<path where the TensorBoard logs are saved> --port=xxxx

Visualizing the model structure:

Define the model:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3)
        self.pool = nn.MaxPool2d(kernel_size = 2,stride = 2)
        self.conv2 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5)
        self.adaptive_pool = nn.AdaptiveMaxPool2d((1,1))
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(64,32)
        self.relu = nn.ReLU()
        self.linear2 = nn.Linear(32,1)
        self.sigmoid = nn.Sigmoid()

    def forward(self,x):
        x = self.conv1(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = self.pool(x)
        x = self.adaptive_pool(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.relu(x)
        x = self.linear2(x)
        y = self.sigmoid(x)
        return y

model = Net()
print(model)
writer.add_graph(model, input_to_model = torch.rand(1, 3, 224, 224))  # given an input tensor, a forward pass is traced to obtain the model graph
writer.close()

Continuous scalars (curves):

writer1 = SummaryWriter('./pytorch_tb/x')
writer2 = SummaryWriter('./pytorch_tb/y')
for i in range(500):
    x = i
    y = x*2
    writer1.add_scalar("same", x, i)  # log the value of x at step i
    writer2.add_scalar("same", y, i)  # log the value of y at step i
writer1.close()
writer2.close()

Parameter distributions (histograms):

import torch
import numpy as np
from torch.utils.tensorboard import SummaryWriter

# create normally distributed tensors to simulate a parameter matrix
def norm(mean, std):
    t = std * torch.randn((100, 20)) + mean
    return t
 
writer = SummaryWriter('./pytorch_tb/')
for step, mean in enumerate(range(-10, 10, 1)):
    w = norm(mean, 1)
    writer.add_histogram("w", w, step)
    writer.flush()
writer.close()

 

From: https://www.cnblogs.com/yuyongzhen-98/p/16743896.html
