
Hands-On Deep Learning: CNN Application Demo


CNN application demo

Simple handwritten digit recognition with a CNN

import torch
import torch.nn.functional as F
from torchvision import datasets, transforms
from tqdm import tqdm

def relu(x):
    # ReLU implemented manually by clamping negative values to zero
    return torch.clamp(x, min=0)

def linear(x, weight, bias):
    # fully connected layer: x @ weight + bias, with the bias broadcast over the batch
    out = torch.matmul(x, weight) + bias.view(1, -1)
    return out

def model(x, params):
    # conv1: 1 -> 4 channels, 5x5 kernel, stride 2, no padding: 28x28 -> 12x12
    x = F.conv2d(x, params[0], params[1], 2, 0)
    x = relu(x)
    # conv2: 4 -> 8 channels, 3x3 kernel, stride 2, no padding: 12x12 -> 5x5
    x = F.conv2d(x, params[2], params[3], 2, 0)
    x = relu(x)
    # flatten to (batch, 8*5*5) = (batch, 200) and apply the linear classifier
    x = x.view(-1, 200)
    x = linear(x, params[4], params[5])
    return x

init_std = 0.1
params = [
    torch.randn(4, 1, 5, 5) * init_std,  # conv1 weight: 4 out channels, 1 in channel, 5x5 kernel
    torch.zeros(4),                      # conv1 bias
    torch.randn(8, 4, 3, 3) * init_std,  # conv2 weight: 8 out channels, 4 in channels, 3x3 kernel
    torch.zeros(8),                      # conv2 bias
    torch.randn(200, 10) * init_std,     # linear weight: 200 features -> 10 classes
    torch.zeros(10)                      # linear bias
]
for p in params:
    p.requires_grad = True

TRAIN_BATCH_SIZE = 100
TEST_BATCH_SIZE = 100
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        '/data',train=True,download=True,
        transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,),(0.3080,))
        ])
    ),
    batch_size = TRAIN_BATCH_SIZE,shuffle=True
)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        '/data',train=False,
        transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,),(0.3080,))
        ])
    ),
    batch_size = TEST_BATCH_SIZE,shuffle=False
)

LR = 0.1
EPOCH = 100
LOG_INTERVAL = 100

for epoch in range(EPOCH):
    for idx, (data, label) in enumerate(train_loader):
        output = model(data, params)
        loss = F.cross_entropy(output, label)
        # zero the gradients from the previous step before backpropagating
        for p in params:
            if p.grad is not None:
                p.grad.zero_()
        loss.backward()

        # manual SGD update
        for p in params:
            p.data = p.data - LR * p.grad.data

        if idx % LOG_INTERVAL == 0:
            print('Epoch %03d [%03d/%03d]\tLoss:%.4f' % (epoch, idx, len(train_loader), loss.item()))

        # evaluate on the test set after every training batch
        # (this is why the log below prints many "Testing" lines per epoch; it is slow but matches the output)
        correct_num = 0
        total_num = 0
        with torch.no_grad():
            for test_data, test_label in test_loader:
                output = model(test_data, params)
                pred = output.max(1)[1]
                correct_num += (pred == test_label).sum().item()
                total_num += len(test_data)
        acc = correct_num / total_num
        print('...Testing @ Epoch %03d\tAcc: %.4f' % (epoch, acc))
Epoch 000 [000/600]	Loss:2.3304
...Testing @ Epoch 000	Acc: 0.1093
...Testing @ Epoch 000	Acc: 0.1265
...Testing @ Epoch 000	Acc: 0.1392
...Testing @ Epoch 000	Acc: 0.1547
...Testing @ Epoch 000	Acc: 0.1753
...Testing @ Epoch 000	Acc: 0.1978
...Testing @ Epoch 000	Acc: 0.2243
...Testing @ Epoch 000	Acc: 0.2482
...Testing @ Epoch 000	Acc: 0.2802
...Testing @ Epoch 000	Acc: 0.3076
...Testing @ Epoch 000	Acc: 0.3206
...Testing @ Epoch 000	Acc: 0.3458
...Testing @ Epoch 000	Acc: 0.3649
...Testing @ Epoch 000	Acc: 0.4057
...Testing @ Epoch 000	Acc: 0.4618
...Testing @ Epoch 000	Acc: 0.4657
...Testing @ Epoch 000	Acc: 0.4729
...Testing @ Epoch 000	Acc: 0.5428
...Testing @ Epoch 000	Acc: 0.5659
...Testing @ Epoch 000	Acc: 0.5371
...Testing @ Epoch 000	Acc: 0.5344
...Testing @ Epoch 000	Acc: 0.5585
...Testing @ Epoch 000	Acc: 0.4423
...Testing @ Epoch 000	Acc: 0.6185
...
...Testing @ Epoch 000	Acc: 0.8701
...Testing @ Epoch 000	Acc: 0.8501
...Testing @ Epoch 000	Acc: 0.8750
...Testing @ Epoch 000	Acc: 0.8729
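
A quick way to sanity-check the hard-coded flatten size of 200 in model(...) is to push a dummy MNIST-sized batch through the same two convolutions and inspect the shapes. The snippet below is a minimal, standalone sketch that mirrors the kernel/stride settings used above (5x5 with stride 2, then 3x3 with stride 2, both without padding); the tensor names are only for illustration.

import torch
import torch.nn.functional as F

# dummy batch with MNIST dimensions: (batch, channels, height, width)
x = torch.zeros(1, 1, 28, 28)
w1, b1 = torch.zeros(4, 1, 5, 5), torch.zeros(4)
w2, b2 = torch.zeros(8, 4, 3, 3), torch.zeros(8)

x = F.conv2d(x, w1, b1, 2, 0)   # (28 - 5) // 2 + 1 = 12 -> shape (1, 4, 12, 12)
print(x.shape)
x = F.conv2d(x, w2, b2, 2, 0)   # (12 - 3) // 2 + 1 = 5  -> shape (1, 8, 5, 5)
print(x.shape)
print(x.view(1, -1).shape)      # (1, 200): 8 * 5 * 5 = 200, matching x.view(-1, 200)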

An optimized version of the code that can be trained on a GPU

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from tqdm import tqdm
class CNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=10):
        super().__init__()
        # two 3x3 convolutions with padding 1 keep the spatial size; each max-pool halves it
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=8, kernel_size=(3,3), stride=(1,1), padding=(1,1))
        self.pool = nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))
        self.conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(3,3), stride=(1,1), padding=(1,1))
        self.fc1 = nn.Linear(16*7*7, num_classes)   # 28 -> 14 -> 7 after two pooling layers

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        x = x.reshape(x.shape[0], -1)   # flatten to (batch, 16*7*7)
        x = self.fc1(x)
        return x

# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
# Hyperparameters
in_channels = 1
num_classes = 10
learning_rate = 0.001
batch_size = 64
num_epochs = 5

# Load Data
train_dataset = datasets.MNIST(root="dataset/",train=True,transform=transforms.ToTensor(),download=True)
train_loader = DataLoader(dataset=train_dataset,batch_size=batch_size,shuffle=True)

test_dataset = datasets.MNIST(root="dataset/",train=False,transform=transforms.ToTensor(),download=True)
test_loader = DataLoader(dataset=test_dataset,batch_size=batch_size,shuffle=False)

# Initialize network
model = CNN().to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(),lr=learning_rate)

# Train Network

for epoch in range(num_epochs):
    # for data, targets in tqdm(train_loader, leave=False):  # keep the progress bar on a single line
    for data, targets in tqdm(train_loader):
        # Get data to cuda if possible
        data = data.to(device=device)
        targets = targets.to(device=device)

        # forward
        scores = model(data)
        loss = criterion(scores, targets)

        # backward
        optimizer.zero_grad()
        loss.backward()

        # gradient descent or adam step
        optimizer.step()
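
The GPU version above only runs the training loop and never reports accuracy. A minimal evaluation sketch, assuming the model, device, train_loader and test_loader defined above, could look like the following; the helper name check_accuracy is purely illustrative.

def check_accuracy(loader, model):
    # count correct top-1 predictions over the whole loader
    num_correct = 0
    num_samples = 0
    model.eval()
    with torch.no_grad():
        for data, targets in loader:
            data = data.to(device=device)
            targets = targets.to(device=device)
            scores = model(data)
            preds = scores.max(1)[1]
            num_correct += (preds == targets).sum().item()
            num_samples += preds.size(0)
    model.train()
    return num_correct / num_samples

print('Train Acc: %.4f' % check_accuracy(train_loader, model))
print('Test Acc: %.4f' % check_accuracy(test_loader, model))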

From: https://www.cnblogs.com/Sun-Wind/p/18174450

    版本python3.7tensorflow版本为tensorflow-gpu版本2.6实验14-1使用cnn完成MNIST手写体识别(tf)运行结果: 代码:importtensorflowastf#Tensorflow提供了一个类来处理MNIST数据fromtensorflow.examples.tutorials.mnistimportinput_dataimporttime#载入数据集mn......