
AlexNet Deep Convolutional Neural Network (PyTorch Version)


import torch
from torch import nn
from d2l import torch as d2l

net = nn.Sequential(
    # Output size: (224 + 2*1 - 11) // 4 + 1 = 54
    nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=1), nn.ReLU(),
    # (54 - 3) // 2 + 1 = 26
    nn.MaxPool2d(kernel_size=3, stride=2),
    # (26 + 2*2 - 5) // 1 + 1 = 26
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    # (26 - 3) // 2 + 1 = 12
    nn.MaxPool2d(kernel_size=3, stride=2),
    # (12 + 2*1 - 3) // 1 + 1 = 12
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    # (12 + 2*1 - 3) // 1 + 1 = 12
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    # (12 + 2*1 - 3) // 1 + 1 = 12
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    # (12 - 3) // 2 + 1 = 5
    nn.MaxPool2d(kernel_size=3, stride=2), nn.Flatten(),
    # 256 * 5 * 5 = 6400 flattened features
    nn.Linear(6400, 4096), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(p=0.5),
    # 10 output classes for Fashion-MNIST
    nn.Linear(4096, 10)
)
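Each comment above follows the standard output-size formula for convolution and pooling layers: out = floor((in + 2*padding - kernel_size) / stride) + 1. A minimal helper to double-check the numbers (conv_out is my own name, not a d2l function):

def conv_out(n, k, s=1, p=0):
    # floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

print(conv_out(224, k=11, s=4, p=1))  # 54, after the first convolution
print(conv_out(54, k=3, s=2))         # 26, after the first max-pooling layer
print(conv_out(26, k=5, p=2))         # 26, padding 2 preserves the size under a 5x5 kernel
print(conv_out(12, k=3, s=2))         # 5, after the final max-pooling layer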

# Sanity-check the layer output shapes with a dummy single-channel 224x224 input
x = torch.randn(1, 1, 224, 224)
for layer in net:
    x = layer(x)
    print(layer.__class__.__name__, 'output shape:\t', x.shape)
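With a (1, 1, 224, 224) input, the loop prints one line per module; the shapes should come out as follows:

Conv2d output shape:     torch.Size([1, 96, 54, 54])
ReLU output shape:       torch.Size([1, 96, 54, 54])
MaxPool2d output shape:  torch.Size([1, 96, 26, 26])
Conv2d output shape:     torch.Size([1, 256, 26, 26])
ReLU output shape:       torch.Size([1, 256, 26, 26])
MaxPool2d output shape:  torch.Size([1, 256, 12, 12])
Conv2d output shape:     torch.Size([1, 384, 12, 12])
ReLU output shape:       torch.Size([1, 384, 12, 12])
Conv2d output shape:     torch.Size([1, 384, 12, 12])
ReLU output shape:       torch.Size([1, 384, 12, 12])
Conv2d output shape:     torch.Size([1, 256, 12, 12])
ReLU output shape:       torch.Size([1, 256, 12, 12])
MaxPool2d output shape:  torch.Size([1, 256, 5, 5])
Flatten output shape:    torch.Size([1, 6400])
Linear output shape:     torch.Size([1, 4096])
ReLU output shape:       torch.Size([1, 4096])
Dropout output shape:    torch.Size([1, 4096])
Linear output shape:     torch.Size([1, 4096])
ReLU output shape:       torch.Size([1, 4096])
Dropout output shape:    torch.Size([1, 4096])
Linear output shape:     torch.Size([1, 10])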


batch_size = 128
# Fashion-MNIST images are 28x28; upsample them to the 224x224 input AlexNet expects
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size, resize=224)
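d2l.load_data_fashion_mnist is a thin wrapper around torchvision (Resize followed by ToTensor). If you would rather avoid the d2l dependency for data loading, a roughly equivalent sketch with plain torchvision ('./data' is an arbitrary download path of my choosing):

import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

trans = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_set = torchvision.datasets.FashionMNIST(root='./data', train=True, transform=trans, download=True)
test_set = torchvision.datasets.FashionMNIST(root='./data', train=False, transform=trans, download=True)
train_iter = DataLoader(train_set, batch_size, shuffle=True)
test_iter = DataLoader(test_set, batch_size, shuffle=False)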

lr, num_epochs = 0.01, 10
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
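d2l.train_ch6 runs a standard SGD + cross-entropy training loop on the given device; it also applies Xavier initialization to the conv and linear weights and plots train/test accuracy as it goes. Stripped of the initialization, plotting, and evaluation, the core of the loop is roughly this sketch:

device = d2l.try_gpu()  # falls back to CPU when no GPU is available
net.to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=lr)

for epoch in range(num_epochs):
    net.train()  # enable dropout during training
    for X, y in train_iter:
        X, y = X.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()
        optimizer.step()
    print(f'epoch {epoch + 1}, last batch loss {loss.item():.3f}')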


From: https://www.cnblogs.com/jinbb/p/17609409.html
