
PyTorch Learning 9: Convolutional Neural Networks




Preface

This post introduces the basic concepts of convolutional neural networks and works through a concrete example.

I. Notes

1. A network built by chaining linear layers in series is a fully connected network.
2. A fully connected network loses some of the image's spatial information, because the data is stored as a one-dimensional structure. A CNN keeps the data in the image's original layout, so the spatial information is preserved.
3. After convolution, an image is still a three-dimensional tensor (channels × height × width).
4. Subsampling (downsampling) leaves the number of channels unchanged but shrinks the image's height and width, reducing the amount of data and the computational cost.
5. Diagram of the convolution operation:
[Figure: convolution operation diagram]
6. The padding parameter wraps extra rings around the input, filled with zeros.
7. The stride parameter is the step size of the kernel as it slides during convolution.
8. Downsampling usually uses a max-pooling layer: the channel count stays the same while the width and height shrink. (A short shape-check sketch follows this list.)
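To make points 6-8 concrete, here is a minimal sketch; the kernel sizes and channel counts are chosen only for illustration, not taken from the lecture. It prints the output shapes produced by padding, stride, and max pooling:

import torch

x = torch.randn(1, 1, 28, 28)  # a dummy batch with one 1x28x28 image

# padding=1 adds one ring of zeros: output is (28 - 3 + 2*1)/1 + 1 = 28
conv_pad = torch.nn.Conv2d(1, 10, kernel_size=3, padding=1)
print(conv_pad(x).shape)     # torch.Size([1, 10, 28, 28])

# stride=2 moves the kernel two pixels at a time: (28 - 3)//2 + 1 = 13
conv_stride = torch.nn.Conv2d(1, 10, kernel_size=3, stride=2)
print(conv_stride(x).shape)  # torch.Size([1, 10, 13, 13])

# 2x2 max pooling halves height and width; the channel count is unchanged
pool = torch.nn.MaxPool2d(2)
print(pool(x).shape)         # torch.Size([1, 1, 14, 14])

The general rule is output = (input − kernel + 2·padding) / stride + 1, rounded down.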

II. A Concrete Example

1. Program description

The input has size 1×28×28. A convolution with ten 1×5×5 kernels turns it into 10×24×24; 2×2 max pooling gives 10×12×12; a convolution with twenty 10×5×5 kernels gives 20×8×8; 2×2 max pooling gives 20×4×4. This is flattened into a 320-element vector, which a fully connected layer maps to a 10-element output. The shape trace below confirms each step.
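The following minimal sketch (an illustration using the same layer definitions as the full code further down) pushes a dummy tensor through each stage and prints the intermediate shapes:

import torch

x = torch.randn(1, 1, 28, 28)                  # dummy 1x28x28 input
conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
pool = torch.nn.MaxPool2d(2)

x = conv1(x); print(x.shape)       # [1, 10, 24, 24]  (28 - 5 + 1 = 24)
x = pool(x);  print(x.shape)       # [1, 10, 12, 12]  (24 / 2 = 12)
x = conv2(x); print(x.shape)       # [1, 20, 8, 8]    (12 - 5 + 1 = 8)
x = pool(x);  print(x.shape)       # [1, 20, 4, 4]    (8 / 2 = 4)
x = x.view(1, -1); print(x.shape)  # [1, 320]         (20 * 4 * 4 = 320)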

2. Code example

The code is as follows (example):

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import pickle
import os

# prepare dataset


# design model using class


class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)

    def forward(self, x):
        batch_size = x.size(0)
        x = F.relu(self.pooling(self.conv1(x)))  # (n, 1, 28, 28) -> (n, 10, 12, 12)
        x = F.relu(self.pooling(self.conv2(x)))  # (n, 10, 12, 12) -> (n, 20, 4, 4)
        x = x.view(batch_size, -1)  # flatten to (n, 320); -1 is inferred as 320
        x = self.fc(x)

        return x





# training cycle forward, backward, update


def train(epoch):
    running_loss = 0.0
    loss_s = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        loss_s += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
    return loss_s / len(train_loader)


def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %% ' % (100 * correct / total))
    return 100 * correct / total


if __name__ == '__main__':
    os.makedirs('9', exist_ok=True)  # output directory for the pickled curves
    batch_size = 64
    transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])

    train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, download=True, transform=transform)
    train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
    test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, download=True, transform=transform)
    test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)

    model = Net()

    # construct loss and optimizer
    # CrossEntropyLoss applies log-softmax internally, so the model outputs raw logits
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

    epoch_list = []
    loss_list = []
    accuracy_list = []
    for epoch in range(10):
        epoch_list.append(epoch)
        epoch_loss = train(epoch)    # average training loss over the epoch
        loss_list.append(epoch_loss)
        accuracy = test()            # test-set accuracy (%) after the epoch
        accuracy_list.append(accuracy)
        # save the curves after every epoch so an interrupted run keeps its history
        with open('9/epoch_list.pkl', 'wb') as f:
            pickle.dump(epoch_list, f)
        with open('9/loss_list.pkl', 'wb') as f:
            pickle.dump(loss_list, f)
        with open('9/accuracy_list.pkl', 'wb') as f:
            pickle.dump(accuracy_list, f)

The plotting program is as follows:

import pickle
import matplotlib.pyplot as plt

with open('9/epoch_list.pkl', 'rb') as f:
    loaded_epoch_list = pickle.load(f)
with open('9/loss_list.pkl', 'rb') as f:
    loaded_loss_list = pickle.load(f)
with open('9/accuracy_list.pkl', 'rb') as f:
    loaded_acc_list = pickle.load(f)

plt.subplot(2, 1, 1)  # 2 rows, 1 column, first subplot
plt.plot(loaded_epoch_list, loaded_loss_list)
plt.xlabel('epoch')
plt.ylabel('loss')

plt.subplot(2, 1, 2)  # 2 rows, 1 column, second subplot
plt.plot(loaded_epoch_list, loaded_acc_list, 'r')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()

This produces the following results:
[Figure: loss curve and accuracy curve over the ten epochs]

The version of the program that runs on the GPU is as follows:

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import pickle
import os
import time
# prepare dataset


# design model using class


class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)

    def forward(self, x):
        batch_size = x.size(0)
        x = F.relu(self.pooling(self.conv1(x)))  # (n, 1, 28, 28) -> (n, 10, 12, 12)
        x = F.relu(self.pooling(self.conv2(x)))  # (n, 10, 12, 12) -> (n, 20, 4, 4)
        x = x.view(batch_size, -1)  # flatten to (n, 320); -1 is inferred as 320
        x = self.fc(x)

        return x





# training cycle forward, backward, update


def train(epoch):
    running_loss = 0.0
    loss_s = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        loss_s += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
    return loss_s / len(train_loader)


def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %% ' % (100 * correct / total))
    return 100 * correct / total


if __name__ == '__main__':
    start_time = time.time()
    os.makedirs('9', exist_ok=True)  # output directory for the pickled curves
    batch_size = 64
    transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])

    train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, download=True, transform=transform)
    train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
    test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, download=True, transform=transform)
    test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)

    model = Net()
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)

    # construct loss and optimizer
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

    epoch_list = []
    loss_list = []
    accuracy_list = []
    for epoch in range(10):
        epoch_list.append(epoch)
        epoch_loss = train(epoch)    # average training loss over the epoch
        loss_list.append(epoch_loss)
        accuracy = test()            # test-set accuracy (%) after the epoch
        accuracy_list.append(accuracy)
        # save the curves after every epoch so an interrupted run keeps its history
        with open('9/epoch_list.pkl', 'wb') as f:
            pickle.dump(epoch_list, f)
        with open('9/loss_list.pkl', 'wb') as f:
            pickle.dump(loss_list, f)
        with open('9/accuracy_list.pkl', 'wb') as f:
            pickle.dump(accuracy_list, f)
    end_time = time.time()

    print('training time: %.2f s' % (end_time - start_time))

This produces the following results:
[Figures: GPU run output]
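One optional tweak, not part of the original script: when training on a GPU, the DataLoader can use pinned host memory and background worker processes, which often speeds up host-to-device copies. A hedged sketch (the num_workers value is an arbitrary assumption):

# Optional DataLoader settings for GPU training (values are assumptions, not
# from the original post): pin_memory=True allocates page-locked host memory
# so the .to(device) copies run faster, and num_workers=2 loads batches in
# background processes while the GPU computes.
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size,
                          num_workers=2, pin_memory=True)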

Summary

PyTorch Learning 9: Convolutional Neural Networks

From: https://blog.csdn.net/qq_59940419/article/details/139514740
