
Common Code Snippets and Tips



Automatically select GPU or CPU

import torch
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Move the model (and any tensors) to the selected device
vgg = models.vgg16().to(device)

Change the current working directory

import os
try:
    # Move one level up from the current working directory
    os.chdir(os.path.join(os.getcwd(), '..'))
    print(os.getcwd())
except OSError:
    pass

Temporarily add a directory to the module search path

import sys
sys.path.append('/path/to/your/module')  # placeholder: directory containing the module to import
print(sys.path)

Print a summary of model parameters

from torchsummary import summary
# input size is (channels, height, width); the leading 1 is in_channels
summary(model, (1, 28, 28))
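
A self-contained example with a toy network (the architecture below is hypothetical, purely for illustration; the device argument is passed explicitly so the summary also runs without a GPU):

import torch.nn as nn
from torchsummary import summary

toy_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 input channel, matching the (1, 28, 28) size above
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 28 * 28, 10),
)
summary(toy_model, (1, 28, 28), device='cpu')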

Convert a list of tensors into a single tensor

x = torch.stack(tensor_list)  # all tensors in the list must have the same shape
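
As a quick check of the resulting shapes (toy tensors for illustration): torch.stack adds a new leading dimension and requires identical shapes, while torch.cat joins along an existing dimension.

import torch

tensor_list = [torch.randn(2, 3) for _ in range(4)]
x = torch.stack(tensor_list)  # shape (4, 2, 3): new leading dimension
y = torch.cat(tensor_list)    # shape (8, 3): concatenated along dim 0
print(x.shape, y.shape)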

Out of memory

  • Use a smaller batch size
  • Call torch.cuda.empty_cache() every few minibatches
  • Use distributed or multi-GPU training
  • Process the training data and the test data separately
  • Delete variables as soon as they are no longer needed, e.g. del x (a combined sketch follows this list)
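
A minimal sketch of the empty_cache() and del tips (the tensor sizes and the loop body are made up purely for illustration):

import gc
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

for step in range(20):
    x = torch.randn(64, 3, 224, 224, device=device)  # stand-in for a minibatch
    loss = (x ** 2).mean()                           # stand-in for a real loss
    # ... backward pass and optimizer step would go here ...
    del x, loss                                      # drop references as soon as possible
    if step % 10 == 0:
        gc.collect()
        torch.cuda.empty_cache()  # returns cached blocks to the CUDA driver; no-op on CPU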

Debug tensor memory

The resource module is Unix-specific (see https://docs.python.org/2/library/resource.html), so the snippet below works on Linux and macOS but raises an ImportError on Windows.


def debug_memory():
    import collections, gc, resource, torch
    print('maxrss = {}'.format(
        resource.getrusage(resource.RUSAGE_SELF).ru_maxrss))
    # Count live tensors grouped by (device, dtype, shape)
    tensors = collections.Counter((str(o.device), o.dtype, tuple(o.shape))
                                  for o in gc.get_objects()
                                  if torch.is_tensor(o))
    for line in sorted(tensors.items(), key=lambda item: str(item[0])):
        print('{}\t{}'.format(*line))


# example
import torch

x = torch.randn(3, 3)
debug_memory()

y = torch.randn(3, 3)
debug_memory()

z = [torch.randn(i).long() for i in range(10)]
debug_memory()
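
On Windows, where the resource module is unavailable, one possible substitute (assuming the third-party psutil package is installed) is to read the current resident set size instead of ru_maxrss:

import psutil

def current_rss_bytes():
    # Resident set size of the current process, in bytes (cross-platform)
    return psutil.Process().memory_info().rss

print('rss = {} bytes'.format(current_rss_bytes()))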

10-18-2019


Plotting dashed lines with matplotlib

%matplotlib inline
from matplotlib import pyplot as plt
from IPython import display
import torch
import math

x = torch.arange(-7, 7, 0.01)
# (mean, standard deviation) pairs
parameters = [(0, 1), (0, 2), (3, 1)]

# Display SVG rather than JPG
display.set_matplotlib_formats('svg')
plt.figure(figsize=(10, 6))
for (mu, sigma) in parameters:
    p = (1 / math.sqrt(2 * math.pi * sigma**2)) * torch.exp(-(0.5 / sigma**2) * (x - mu)**2)
    plt.plot(x.numpy(), p.numpy(), label='mean ' + str(mu) + ', std ' + str(sigma))
plt.axhline(y=0, color='black', linestyle='dashed')
plt.legend()
plt.show()

Training loop and loss evaluation on the training set

lr = 0.03  # Learning rate
num_epochs = 3  # Number of epochs
net = linreg  # Our fancy linear model
loss = squared_loss  # 0.5 * (y - y')^2

for epoch in range(num_epochs):
    # Assuming the number of examples is divisible by the batch size, every
    # example in the training set is used exactly once per epoch. The
    # features and labels of each minibatch are given by X and y respectively
    for X, y in data_iter(batch_size, features, labels):
        l = loss(net(X, w, b), y)  # Minibatch loss in X and y
        l.mean().backward()  # Compute gradient on l with respect to [w,b]
        sgd([w, b], lr, batch_size)  # Update parameters using their gradient
    with torch.no_grad():
        train_l = loss(net(features, w, b), labels)
        print('epoch %d, loss %f' % (epoch + 1, train_l.mean().numpy()))
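
The loop above relies on helpers (linreg, squared_loss, sgd, data_iter) and on pre-initialized w, b, features, labels defined elsewhere. A minimal sketch of those helpers, kept consistent with the loop (since l.mean() already averages the loss, sgd does not divide the gradient by batch_size again), could be:

import random
import torch

def linreg(X, w, b):
    # Linear model: y_hat = Xw + b
    return torch.matmul(X, w) + b

def squared_loss(y_hat, y):
    # Element-wise squared loss: 0.5 * (y_hat - y)^2
    return 0.5 * (y_hat - y.reshape(y_hat.shape)) ** 2

def sgd(params, lr, batch_size):
    # In-place SGD step on each parameter's .grad
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad
            param.grad.zero_()

def data_iter(batch_size, features, labels):
    # Yield shuffled minibatches of (features, labels)
    num_examples = len(features)
    indices = list(range(num_examples))
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        idx = torch.tensor(indices[i:i + batch_size])
        yield features[idx], labels[idx]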

Save the best model

import copy
import time

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()  # start timing

    # Keep a deep copy so the best weights are not overwritten by later training
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training phase and a validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()   # set the model to training mode
            else:
                model.eval()    # set the model to evaluation mode

            running_loss = 0.0
            running_corrects = 0


            for data in dataloaders[phase]:

                inputs, labels = data

                # Move the minibatch to the GPU when one is available
                # (Variable wrappers are unnecessary in PyTorch >= 0.4)
                if use_gpu:
                    inputs, labels = inputs.cuda(), labels.cuda()

                # Zero the parameter gradients
                optimizer.zero_grad()

                # Forward pass
                outputs = model(inputs)
                _, preds = torch.max(outputs.data, 1)
                loss = criterion(outputs, labels)

                # Backward pass and optimizer step, only in the training phase
                if phase == 'train':
                    loss.backward()
                    optimizer.step()

                # Accumulate statistics (criterion returns the batch-mean loss)
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            # Deep-copy the weights whenever validation accuracy improves
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:.4f}'.format(best_acc))

    # Load the best model weights
    model.load_state_dict(best_model_wts)
    return model
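
To call it, something like the following would work (a hedged usage sketch: the dataloaders, dataset_sizes and use_gpu globals referenced above, the two-class head and the 'best_model.pth' filename are all assumptions for illustration):

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torchvision import models

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g. a two-class problem
if use_gpu:
    model = model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

best_model = train_model(model, criterion, optimizer, scheduler, num_epochs=25)
torch.save(best_model.state_dict(), 'best_model.pth')  # persist the best weights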

From: https://www.cnblogs.com/memokeerbisi/p/18446017
