Deep Learning -- Convolutional Neural Network Basics
1. Convolution Operation
Simply put, a convolution multiplies a kernel elementwise against each local patch of the input and sums the products. Because the same kernel slides over every position, this not only greatly reduces the number of model parameters (compared with a fully connected layer), but also lets the model pick up the local correlations in an image.
import torch
import torch.nn as nn
import torch.nn.functional as F
# Convolution: nn.Conv2d(in_channels: number of input channels, out_channels: number of kernels, i.e. output channels, kernel_size: size of each kernel, stride: step of the sliding window, padding: zero-padding added to the borders)
layer = nn.Conv2d(1, 3, kernel_size=3, stride=1, padding=0)
x = torch.rand(1, 1, 28, 28)   # (batch, channels, height, width)
out = layer(x)                 # call the module directly rather than layer.forward(x)
out.shape
# torch.Size([1, 3, 26, 26])
layer = nn.Conv2d(1, 3, 3, 1, 1)   # padding=1 keeps the 28x28 spatial size
out = layer(x)
out.shape
# torch.Size([1, 3, 28, 28])
layer = nn.Conv2d(1, 16, 3, 2, 1)  # stride=2 halves the spatial size
out = layer(x)
out.shape
# torch.Size([1, 16, 14, 14])
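# Output spatial size follows: out = floor((in + 2*padding - kernel_size) / stride) + 1
# e.g. (28 + 0 - 3)/1 + 1 = 26,  (28 + 2 - 3)/1 + 1 = 28,  floor((28 + 2 - 3)/2) + 1 = 14, matching the shapes above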
layer.weight   # learnable kernels, shape [out_channels, in_channels, kernel_h, kernel_w]
# Parameter containing:  (printout below is from the earlier Conv2d(1, 3, ...) layer, shape [3, 1, 3, 3];
# for the Conv2d(1, 16, ...) layer just defined, the shape would be [16, 1, 3, 3])
# tensor([[[[-0.1798, -0.1656,  0.1464],
#           [ 0.1882,  0.2773, -0.3111],
#           [-0.1793,  0.0608,  0.0770]]],
#
#         [[[-0.3308, -0.0402, -0.3012],
#           [-0.1773, -0.1429,  0.2020],
#           [-0.0483, -0.0098,  0.3240]]],
#
#         [[[-0.2946,  0.2950, -0.1390],
#           [-0.2534, -0.2021,  0.3280],
#           [-0.1135,  0.1895, -0.3254]]]], requires_grad=True)
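To make "multiply the corresponding positions and sum" concrete, here is a minimal sketch (not from the original notes) that reproduces one output value of F.conv2d by hand:

inp = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)   # a tiny 3x3 input
w = torch.ones(1, 1, 2, 2)                                       # a single 2x2 kernel of ones
out = F.conv2d(inp, w)        # no padding, stride 1 -> output shape (1, 1, 2, 2)
out[0, 0, 0, 0]               # top-left output = 0 + 1 + 3 + 4
# tensor(8.)
(inp[0, 0, :2, :2] * w[0, 0]).sum()   # the same value computed by hand
# tensor(8.)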
2. Pooling and Sampling
Pooling is a special operation in convolutional networks: within each local window it extracts only the key information of that region (the maximum, in the case of max pooling). It usually follows a convolutional layer and reduces the number of output feature values, which in turn cuts the parameters of later layers and helps mitigate overfitting.
The operations in this step:
- pooling (downsampling): shrink the feature map, e.g. 4 * 4 -> 2 * 2
- upsample (upsampling): enlarge the feature map
- ReLU (an elementwise activation rather than a sampling operation; see the sketch after the code below)
# Pooling and sampling: nn.MaxPool2d(size of the pooling window, stride)
x = torch.rand(1, 16, 28, 28)
layer = nn.MaxPool2d(2, 2)   # 2x2 window with stride 2 -> halves height and width
out = layer(x)
out.shape
# torch.Size([1, 16, 14, 14])
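# (Aside, not in the original notes: average pooling is the other common variant;
# it averages each window instead of taking its maximum. Via the functional API:)
out_avg = F.avg_pool2d(x, 2, stride=2)
out_avg.shape
# torch.Size([1, 16, 14, 14])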
# Upsampling: enlarge the feature map again
x = out
out = F.interpolate(x, scale_factor=2, mode='nearest')
out.shape
# torch.Size([1, 16, 28, 28])
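As for the ReLU item in the list above: it is an elementwise activation that zeroes out negative values, so it leaves the tensor shape unchanged. A minimal sketch with the functional API:

x = torch.randn(1, 16, 14, 14)   # random tensor containing negative entries
out = F.relu(x)                  # max(0, x) applied elementwise
out.shape
# torch.Size([1, 16, 14, 14])
out.min() >= 0                   # every negative entry has been zeroed
# tensor(True)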
3. Normalization
(Note to self: I was not entirely clear what this part is doing!!!) In short, normalization rescales feature values to a standard range; Batch Normalization, used below, normalizes each channel over the batch to zero mean and unit variance and then applies a learnable per-channel scale and shift.
# Feature scaling
# Image Normalization: per-channel normalize with mean=[R, G, B] (and matching std)
# normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
# Batch Normalization
# Variants: Batch Norm, Layer Norm, Instance Norm, Group Norm
# Benefits: faster convergence, better performance, more robust training
x = torch.randn(1, 16, 7, 7)
x.shape
# torch.Size([1, 16, 7, 7])
layer = nn.BatchNorm2d(16)   # number of channels; one scale/shift pair per channel
out = layer(x)
layer.weight                 # gamma, the learnable per-channel scale (initialized to 1)
# Parameter containing:
# tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
#        requires_grad=True)
layer.weight.shape
# torch.Size([16])
layer.bias.shape             # beta, the learnable per-channel shift
# torch.Size([16])
vars(layer)   # inspect internal state such as running_mean, running_var, eps, momentum
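One detail worth adding (not in the original notes): BatchNorm2d behaves differently in training and evaluation mode. During training it normalizes with the current batch's statistics and updates running estimates; in eval mode it uses those running estimates instead. A minimal sketch reusing the layer and x from above:

out.mean(dim=(0, 2, 3))   # per-channel means of the training-mode output, all close to 0
layer.running_mean.shape  # running statistics are also kept per channel
# torch.Size([16])
layer.eval()              # evaluation mode: normalize with running_mean / running_var
out_eval = layer(x)
layer.train()             # switch back before further training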