
6-1 Three Ways to Build a Model


A model can be built in any of the following three ways:

1. Subclass the nn.Module base class to build a custom model.

2. Use nn.Sequential to build the model layer by layer.

3. Subclass the nn.Module base class and use model containers (nn.Sequential, nn.ModuleList, nn.ModuleDict) to encapsulate parts of the model.

The first approach is the most common, the second is the simplest, and the third is the most flexible but also relatively complex.

The first approach is recommended.

import torch
import torchkeras

print('torch.__version__=' + torch.__version__)
print('torchkeras.__version__=' + torchkeras.__version__)

"""
torch.__version__=2.3.1+cu121
torchkeras.__version__=3.9.6
"""

1. Subclassing the nn.Module Base Class to Build a Custom Model

Below is an example of building a custom model by subclassing the nn.Module base class. The layers used by the model are usually defined in the __init__ method, and the model's forward-pass logic is then defined in the forward method.

from torch import nn


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.dropout = nn.Dropout2d(p=0.1)
        self.adaptive_pool = nn.AdaptiveMaxPool2d((1, 1))  # 2D adaptive max pooling over the input
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(64, 32)
        self.relu = nn.ReLU()
        self.linear2 = nn.Linear(32, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        x = self.dropout(x)
        x = self.adaptive_pool(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.relu(x)
        y = self.linear2(x)
        return y

net = Net()
print(net)

"""
Net(
  (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
  (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
  (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (dropout): Dropout2d(p=0.1, inplace=False)
  (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=64, out_features=32, bias=True)
  (relu): ReLU()
  (linear2): Linear(in_features=32, out_features=1, bias=True)
)
"""
from torchkeras import summary

summary(net, input_shape=(3, 32, 32));

"""
--------------------------------------------------------------------------
Layer (type)                            Output Shape              Param #
==========================================================================
Conv2d-1                            [-1, 32, 30, 30]                  896
MaxPool2d-2                         [-1, 32, 15, 15]                    0
Conv2d-3                            [-1, 64, 11, 11]               51,264
MaxPool2d-4                           [-1, 64, 5, 5]                    0
Dropout2d-5                           [-1, 64, 5, 5]                    0
AdaptiveMaxPool2d-6                   [-1, 64, 1, 1]                    0
Flatten-7                                   [-1, 64]                    0
Linear-8                                    [-1, 32]                2,080
ReLU-9                                      [-1, 32]                    0
Linear-10                                    [-1, 1]                   33
==========================================================================
Total params: 54,273
Trainable params: 54,273
Non-trainable params: 0
--------------------------------------------------------------------------
Input size (MB): 0.011719
Forward/backward pass size (MB): 0.359627
Params size (MB): 0.207035
Estimated Total Size (MB): 0.578381
--------------------------------------------------------------------------
"""

2. Building a Model Layer by Layer with nn.Sequential

Building a model with nn.Sequential requires no forward method, but it is suitable only for simple models.

Below are several equivalent ways of building a model with nn.Sequential.

  • 1. Using the add_module method
net = nn.Sequential()
net.add_module("conv1", nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3))
net.add_module("pool1", nn.MaxPool2d(kernel_size=2, stride=2))
net.add_module("conv2", nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5))
net.add_module("pool2", nn.MaxPool2d(kernel_size=2, stride=2))
net.add_module("dropout", nn.Dropout2d(p=0.1))
net.add_module("adaptive_pool", nn.AdaptiveMaxPool2d(1, 1))
net.add_module("flatten", nn.Flatten())
net.add_module("linear1", nn.Linear(64, 32))
net.add_module("relu", nn.ReLU())
net.add_module("linear2", nn.Linear(32, 1))
print(net)

"""
Sequential(
  (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
  (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
  (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (dropout): Dropout2d(p=0.1, inplace=False)
  (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=64, out_features=32, bias=True)
  (relu): ReLU()
  (linear2): Linear(in_features=32, out_features=1, bias=True)
)
"""
  • 2. Using variadic arguments; with this approach the layers cannot be given individual names
net = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Dropout2d(p=0.1),
    nn.AdaptiveMaxPool2d((1, 1)),
    nn.Flatten(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 1)
)

print(net)

"""
Sequential(
  (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
  (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
  (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (4): Dropout2d(p=0.1, inplace=False)
  (5): AdaptiveMaxPool2d(output_size=(1, 1))
  (6): Flatten(start_dim=1, end_dim=-1)
  (7): Linear(in_features=64, out_features=32, bias=True)
  (8): ReLU()
  (9): Linear(in_features=32, out_features=1, bias=True)
)
"""
  • 3. Using an OrderedDict
from collections import OrderedDict

net = nn.Sequential(OrderedDict([
    ("conv1", nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)),
    ("pool1",nn.MaxPool2d(kernel_size = 2,stride = 2)),
    ("conv2",nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5)),
    ("pool2",nn.MaxPool2d(kernel_size = 2,stride = 2)),
    ("dropout",nn.Dropout2d(p = 0.1)),
    ("adaptive_pool",nn.AdaptiveMaxPool2d((1,1))),
    ("flatten",nn.Flatten()),
    ("linear1",nn.Linear(64,32)),
    ("relu",nn.ReLU()),
    ("linear2",nn.Linear(32,1))
]))

print(net)

"""
Sequential(
  (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
  (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
  (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (dropout): Dropout2d(p=0.1, inplace=False)
  (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=64, out_features=32, bias=True)
  (relu): ReLU()
  (linear2): Linear(in_features=32, out_features=1, bias=True)
)
"""

from torchkeras import summary

summary(net, input_shape=(3, 32, 32));

"""
--------------------------------------------------------------------------
Layer (type)                            Output Shape              Param #
==========================================================================
Conv2d-1                            [-1, 32, 30, 30]                  896
MaxPool2d-2                         [-1, 32, 15, 15]                    0
Conv2d-3                            [-1, 64, 11, 11]               51,264
MaxPool2d-4                           [-1, 64, 5, 5]                    0
Dropout2d-5                           [-1, 64, 5, 5]                    0
AdaptiveMaxPool2d-6                   [-1, 64, 1, 1]                    0
Flatten-7                                   [-1, 64]                    0
Linear-8                                    [-1, 32]                2,080
ReLU-9                                      [-1, 32]                    0
Linear-10                                    [-1, 1]                   33
==========================================================================
Total params: 54,273
Trainable params: 54,273
Non-trainable params: 0
--------------------------------------------------------------------------
Input size (MB): 0.011719
Forward/backward pass size (MB): 0.359627
Params size (MB): 0.207035
Estimated Total Size (MB): 0.578381
--------------------------------------------------------------------------
"""

3. Subclassing nn.Module and Using Model Containers for Encapsulation

When the structure of a model is relatively complex, model containers (nn.Sequential, nn.ModuleList, nn.ModuleDict) can be used to encapsulate parts of it.

This gives the model a clearer hierarchical structure and can sometimes reduce the amount of code.

Note that each example below uses only one kind of container, but in practice these containers are very flexible: they can be combined and nested arbitrarily within a single model (a sketch at the end of this section shows one such combination).

  • nn.Sequential as a model container
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Dropout2d(p=0.1),
            nn.AdaptiveMaxPool2d((1, 1))
        )
        self.dense = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1)
        )

    def forward(self, x):
        x = self.conv(x)
        y = self.dense(x)
        return y
    
net = Net()
print(net)

"""
Net(
  (conv): Sequential(
    (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Dropout2d(p=0.1, inplace=False)
    (5): AdaptiveMaxPool2d(output_size=(1, 1))
  )
  (dense): Sequential(
    (0): Flatten(start_dim=1, end_dim=-1)
    (1): Linear(in_features=64, out_features=32, bias=True)
    (2): ReLU()
    (3): Linear(in_features=32, out_features=1, bias=True)
  )
)
"""
  • nn.ModuleList as a model container. Note that the ModuleList below cannot be replaced by a plain Python list, because modules held in a plain list are not registered with the model (the sketch after the summary output below demonstrates this)
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Dropout2d(p=0.1),
            nn.AdaptiveMaxPool2d((1, 1)),
            nn.Flatten(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
    
net = Net()
print(net)

"""
Net(
  (layers): ModuleList(
    (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Dropout2d(p=0.1, inplace=False)
    (5): AdaptiveMaxPool2d(output_size=(1, 1))
    (6): Flatten(start_dim=1, end_dim=-1)
    (7): Linear(in_features=64, out_features=32, bias=True)
    (8): ReLU()
    (9): Linear(in_features=32, out_features=1, bias=True)
  )
)
"""

from torchkeras import summary

summary(net, input_shape=(3, 32, 32));

"""
--------------------------------------------------------------------------
Layer (type)                            Output Shape              Param #
==========================================================================
Conv2d-1                            [-1, 32, 30, 30]                  896
MaxPool2d-2                         [-1, 32, 15, 15]                    0
Conv2d-3                            [-1, 64, 11, 11]               51,264
MaxPool2d-4                           [-1, 64, 5, 5]                    0
Dropout2d-5                           [-1, 64, 5, 5]                    0
AdaptiveMaxPool2d-6                   [-1, 64, 1, 1]                    0
Flatten-7                                   [-1, 64]                    0
Linear-8                                    [-1, 32]                2,080
ReLU-9                                      [-1, 32]                    0
Linear-10                                    [-1, 1]                   33
==========================================================================
Total params: 54,273
Trainable params: 54,273
Non-trainable params: 0
--------------------------------------------------------------------------
Input size (MB): 0.011719
Forward/backward pass size (MB): 0.359627
Params size (MB): 0.207035
Estimated Total Size (MB): 0.578381
--------------------------------------------------------------------------
"""
  • nn.ModuleDict as a model container. Note that, for the same reason, the ModuleDict below cannot be replaced by a plain Python dict
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers_dict = nn.ModuleDict({
            "conv1":nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3),
            "pool1": nn.MaxPool2d(kernel_size = 2,stride = 2),
            "conv2":nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
            "pool2": nn.MaxPool2d(kernel_size = 2,stride = 2),
            "dropout": nn.Dropout2d(p = 0.1),
            "adaptive":nn.AdaptiveMaxPool2d((1,1)),
            "flatten": nn.Flatten(),
            "linear1": nn.Linear(64,32),
            "relu":nn.ReLU(),
            "linear2": nn.Linear(32,1)
        })

    def forward(self, x):
        layers = ["conv1", "pool1", "conv2", "pool2","dropout","adaptive", "flatten", "linear1", "relu", "linear2"]
        for layer in layers:
            x = self.layers_dict[layer](x)
        return x
    
net = Net()
print(net)

"""
Net(
  (layers_dict): ModuleDict(
    (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
    (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
    (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (dropout): Dropout2d(p=0.1, inplace=False)
    (adaptive): AdaptiveMaxPool2d(output_size=(1, 1))
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (linear1): Linear(in_features=64, out_features=32, bias=True)
    (relu): ReLU()
    (linear2): Linear(in_features=32, out_features=1, bias=True)
  )
)
"""

