
Study Notes 12: Image Data Augmentation and Learning Rate Decay


Reposted from: https://www.cnblogs.com/miraclepbc/p/14360231.html

Data Augmentation

  • Common data augmentation methods (a preview sketch follows this list):
transforms.RandomCrop # crop at a random position
transforms.CenterCrop # crop at the center
transforms.RandomHorizontalFlip(p = 1) # horizontal flip with probability p (p = 1 always flips)
transforms.RandomVerticalFlip(p = 1) # vertical flip with probability p (p = 1 always flips)
transforms.RandomRotation # rotate by a random angle
transforms.ColorJitter(brightness = 1) # jitter brightness (the keyword is brightness, not "brighter")
transforms.ColorJitter(contrast = 1) # jitter contrast
transforms.ColorJitter(saturation = 0.5) # jitter saturation
transforms.ColorJitter(hue = 0.5) # jitter hue
transforms.RandomGrayscale(p = 0.5) # convert to grayscale with probability 0.5
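
To see what a given transform actually does, it helps to apply it to a single image and inspect the result. A minimal sketch, assuming a local image at the hypothetical path sample.jpg:

from PIL import Image
from torchvision import transforms

img = Image.open("sample.jpg")  # hypothetical path; any local image works

# Random parameters are re-sampled on every call, so applying the
# pipeline repeatedly yields different flips/color shifts.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p = 0.5),
    transforms.ColorJitter(brightness = 0.5)
])
augment(img).show()  # opens the augmented image for visual comparison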

Learning Rate Decay

Learning rate decay lowers the learning rate every few epochs; an exponential-style schedule is the most common.

exp_lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size = 5, gamma = 0.9) # every step_size steps, multiply the lr by gamma
torch.optim.lr_scheduler.MultiStepLR(optimizer, [20, 50, 80], gamma = 0.1) # at the listed epoch milestones, multiply the lr by gamma
torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma = 0.1) # every epoch, lr = initial_lr * gamma ** epoch

Note that the following line must be added inside fit:

exp_lr_scheduler.step()
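
To verify the schedule, one can step a scheduler through dummy epochs and print the resulting learning rate. A minimal sketch (the parameter list here is a placeholder, not the real model):

import torch

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder params, just to build an optimizer
optimizer = torch.optim.Adam(params, lr = 0.0001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size = 5, gamma = 0.9)

for epoch in range(20):
    optimizer.step()   # a real epoch of training would go here
    scheduler.step()   # one scheduler step per epoch
    # after k scheduler steps, lr = 0.0001 * 0.9 ** (k // 5)
    print(epoch, scheduler.get_last_lr())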

Full code

import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms, models
import os
import shutil
%matplotlib inline

train_transform = transforms.Compose([
    transforms.Resize(224),
    transforms.RandomCrop(192),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(0.2),
    transforms.ColorJitter(brightness = 0.5),
    transforms.ColorJitter(contrast = 0.5),
    transforms.ToTensor(),
    transforms.Normalize(mean = [0.5, 0.5, 0.5], std = [0.5, 0.5, 0.5])
])
test_transform = transforms.Compose([
    transforms.Resize((192, 192)),
    transforms.ToTensor(),
    transforms.Normalize(mean = [0.5, 0.5, 0.5], std = [0.5, 0.5, 0.5])
])
train_ds = datasets.ImageFolder(
    "E:/datasets2/29-42/29-42/dataset2/4weather/train",
    transform = train_transform
)
test_ds = datasets.ImageFolder(
    "E:/datasets2/29-42/29-42/dataset2/4weather/test",
    transform = test_transform
)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size = 8, shuffle = True)
test_dl = torch.utils.data.DataLoader(test_ds, batch_size = 8)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = models.vgg16(pretrained = True)
for p in model.features.parameters():   # freeze the convolutional feature extractor
    p.requires_grad = False
# replace the classification head with a 4-class layer
# (assigning to .out_features alone does not resize the layer's weights)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 4)
model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)
epochs = 20
loss_func = torch.nn.CrossEntropyLoss()

exp_lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 
                                                   step_size = 5,
                                                   gamma = 0.9)

def fit(epoch, model, trainloader, testloader):
    correct = 0
    total = 0
    running_loss = 0
    
    model.train()
    for x, y in trainloader:
        x, y = x.to(device), y.to(device)
        y_pred = model(x)
        loss = loss_func(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            y_pred = torch.argmax(y_pred, dim = 1)
            correct += (y_pred == y).sum().item()
            total += y.size(0)
            running_loss += loss.item()

    exp_lr_scheduler.step()    # one scheduler step per epoch
    
    epoch_acc = correct / total
    epoch_loss = running_loss / len(trainloader.dataset)
    
    test_correct = 0
    test_total = 0
    test_running_loss = 0
    
    model.eval()
    with torch.no_grad():
        for x, y in testloader:
            x, y = x.to(device), y.to(device)
            y_pred = model(x)
            loss = loss_func(y_pred, y)
            y_pred = torch.argmax(y_pred, dim = 1)
            test_correct += (y_pred == y).sum().item()
            test_total += y.size(0)
            test_running_loss += loss.item()
    epoch_test_acc = test_correct / test_total
    epoch_test_loss = test_running_loss / len(testloader.dataset)
    
    print('epoch: ', epoch, 
          'loss: ', round(epoch_loss, 3),
          'accuracy: ', round(epoch_acc, 3),
          'test_loss: ', round(epoch_test_loss, 3),
          'test_accuracy: ', round(epoch_test_acc, 3))
    
    return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

train_loss = []
train_acc = []
test_loss = []
test_acc = []
for epoch in range(epochs):
    epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
    train_loss.append(epoch_loss)
    train_acc.append(epoch_acc)
    test_loss.append(epoch_test_loss)
    test_acc.append(epoch_test_acc)

Results

Accuracy improves slightly.
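
Since the per-epoch metrics were collected into lists, the curves can be plotted for inspection, e.g.:

plt.plot(range(1, epochs + 1), train_loss, label = 'train_loss')
plt.plot(range(1, epochs + 1), test_loss, label = 'test_loss')
plt.legend()
plt.show()

plt.plot(range(1, epochs + 1), train_acc, label = 'train_acc')
plt.plot(range(1, epochs + 1), test_acc, label = 'test_acc')
plt.legend()
plt.show()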

