
PyTorch Basics


PyTorch Tutorial


A machine learning framework for Python 3

from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, file):  # read data & preprocess
        self.data = ...
    def __getitem__(self, index):  # return one sample at a time
        return self.data[index]
    def __len__(self):  # return the size of the dataset
        return len(self.data)

dataset = MyDataset(file)
dataloader = DataLoader(dataset, batch_size=size, shuffle=True)
# shuffle: True for training, False for testing
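
As a quick illustration (a toy sketch, not part of the original tutorial), a dataset of random vectors shows how the loader yields batches:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    def __init__(self):
        self.data = torch.randn(100, 8)  # 100 samples with 8 features each
    def __getitem__(self, index):
        return self.data[index]
    def __len__(self):
        return len(self.data)

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
for batch in loader:
    print(batch.shape)  # torch.Size([16, 8]); the last batch may be smaller
    break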

Tensors

High-dimensional matrices (arrays)

import torch
import numpy as np

# Directly from data (list or numpy.ndarray)
x = torch.tensor([[1, -1], [-1, 1]])
x = torch.from_numpy(np.array([[1, -1], [-1, 1]]))
# Tensors of constant zeros & ones
x = torch.zeros([2, 2])
x = torch.ones([1, 2, 5])
x.shape  # show the dimensions (an attribute, not a method)
# Common operations
z = x + y  # addition
z = x - y  # subtraction
y = x.pow(2)  # element-wise power
y = x.sum()  # summation
y = x.mean()  # mean
# Transpose: swap two specified dimensions
x = x.transpose(dim0, dim1)  # exchange dimensions dim0 and dim1
# Squeeze: remove a specified dimension of length 1
x = x.squeeze(1)
# Unsqueeze: insert a new dimension
x = x.unsqueeze(1)

dim in PyTorch == axis in NumPy
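
For instance, summing along dim=0 in PyTorch gives the same result as summing along axis=0 in NumPy:

import torch
import numpy as np

x = torch.ones(2, 3)
print(x.sum(dim=0))  # tensor([2., 2., 2.])
print(np.ones((2, 3)).sum(axis=0))  # [2. 2. 2.]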

Check a tensor's dimensions with .shape.

Creating Tensors

  • Directly from data (list or numpy.ndarray)
x = torch.tensor([[1, -1], [-1, 1]])
x = torch.from_numpy(np.array([[1, -1], [-1, 1]]))
  • Tensor of constant zeros & ones
x = torch.zeros([2,2])
x = torch.ones([1, 2, 5])
  • Common Operations

addition, subtraction, power, summation, mean

  • transpose
x.shape
x.transpose(0,1)

Unsqueeze : expand a new dimension

x = x.unsqueeze(1)
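
A shape-level sketch of these reshaping operations (the values are arbitrary):

import torch

x = torch.zeros(2, 3)
print(x.transpose(0, 1).shape)  # torch.Size([3, 2])
x = torch.zeros(1, 2, 3)
print(x.squeeze(0).shape)  # torch.Size([2, 3])
x = torch.zeros(2, 3)
print(x.unsqueeze(1).shape)  # torch.Size([2, 1, 3])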

Cat: concatenate multiple tensors

torch.cat([x, y, z], dim = 1)
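
The tensors must match in every dimension except the one being concatenated; a small sketch of the resulting shape:

import torch

x = torch.zeros(2, 1, 3)
y = torch.zeros(2, 3, 3)
z = torch.zeros(2, 2, 3)
w = torch.cat([x, y, z], dim=1)
print(w.shape)  # torch.Size([2, 6, 3])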

Data Type: Using different data types for the model and the data will cause errors.

32-bit floating point: torch.float

64-bit integer (signed): torch.long
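
For example (a minimal sketch), a linear layer holds float weights, so an integer tensor must be cast before the forward pass:

import torch
import torch.nn as nn

layer = nn.Linear(2, 1)
x = torch.tensor([[1, 2]])  # integers default to torch.long
# layer(x) would raise a RuntimeError (Long input vs. Float weights)
y = layer(x.float())  # cast to torch.float first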

Device

  • Tensors & modules are computed on the CPU by default
  • Use .to() to move tensors to the appropriate device
  • CPU: x = x.to('cpu')
  • GPU: x = x.to('cuda')
  • Check whether your computer has an NVIDIA GPU with torch.cuda.is_available()
  • Multiple GPUs: specify 'cuda:0', 'cuda:1', 'cuda:2', ...
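
A common pattern (a minimal sketch) selects the device once and reuses it for the model and every batch:

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.zeros(2, 2).to(device)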

Gradient Calculation

import torch
# Define a tensor x that requires gradients by setting requires_grad=True
x = torch.tensor([[1., 0.], [-1., 1.]], requires_grad=True)
# Square x element-wise and sum the entries to get the scalar z
z = x.pow(2).sum()
# Backpropagate from z; the gradient of x is computed automatically
z.backward()
# Print the gradient of x
print(x.grad)
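
Since z = Σ x_ij², the gradient is ∂z/∂x = 2x, so the print shows tensor([[ 2.,  0.], [-2.,  2.]]).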

torch.nn

Network Layers

  • Linear Layer (Fully-connected Layer): nn.Linear(in_features, out_features)

Non-linear Activation Functions

nn.Sigmoid()
nn.ReLU()
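
A quick shape sketch: nn.Linear(32, 64) maps the last dimension from 32 to 64, with a weight of shape (out_features, in_features):

import torch
import torch.nn as nn

layer = nn.Linear(32, 64)
x = torch.zeros(128, 32)  # a batch of 128 samples with 32 features each
y = layer(x)
print(y.shape)  # torch.Size([128, 64])
print(layer.weight.shape)  # torch.Size([64, 32])
print(layer.bias.shape)  # torch.Size([64])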

Build your own neural network

import torch.nn as nn

class MyModel(nn.Module):
    # initialize your model & define layers
    def __init__(self):
        super(MyModel, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(10, 32),
            nn.Sigmoid(),
            nn.Linear(32, 1)
        )

    # compute the output of your network
    def forward(self, x):
        return self.net(x)

Loss Functions

  • Mean Squared Error (for regression tasks)
criterion = nn.MSELoss()
  • Cross Entropy (for classification tasks)
criterion = nn.CrossEntropyLoss()
  • loss = criterion(model_output, expected_value)

torch.optim

  • Stochastic Gradient Descent (SGD): torch.optim.SGD(model.parameters(), lr, momentum = 0)
  • For every batch of data:
  • Call optimizer.zero_grad() to reset the gradients of the model parameters.
  • Call loss.backward() to backpropagate the gradients of the prediction loss.
  • Call optimizer.step() to adjust the model parameters (these three calls appear in the training loop below).

Neural Network Training Setup

dataset = MyDataset(file)
tr_set = DataLoader(dataset, 16, shuffle = True)
model = MyModel().to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), 0.1)

Training Loop

for epoch in range(n_epochs):  # Iterate over n_epochs
    model.train()  # Set the model to training mode
    for x, y in tr_set:  # Iterate over the training set
        optimizer.zero_grad()  # Clear the gradients
        x, y = x.to(device), y.to(device)  # Move data to the device (e.g., GPU)
        pred = model(x)  # Forward pass, compute predictions
        loss = criterion(pred, y)  # Compute the loss
        loss.backward()  # Backward pass, compute gradients
        optimizer.step()  # Update the model's parameters using the gradients

Validation Loop

model.eval()  # Set the model to evaluation mode
total_loss = 0

for x, y in dv_set:  # Iterate over the validation set
    x, y = x.to(device), y.to(device)  # Move data to the device
    with torch.no_grad():  # Disable gradient computation
        pred = model(x)  # Forward pass, compute predictions
        loss = criterion(pred, y)  # Compute the loss
    total_loss += loss.cpu().item() * len(x)  # Accumulate the loss
avg_loss = total_loss / len(dv_set.dataset)  # Average loss per sample (len(dv_set) is the number of batches, not samples)

Testing Loop

model.eval()  # Set the model to evaluation mode
preds = []

for x in tt_set:  # Iterate over the test set
    x = x.to(device)  # Move data to the device
    with torch.no_grad():  # Disable gradient computation
        pred = model(x)  # Forward pass, compute predictions
        preds.append(pred.cpu())  # Append the predictions to the list
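
After the loop, the per-batch outputs are usually combined into a single tensor; a minimal follow-up (assuming each pred keeps its batch dimension):

preds = torch.cat(preds, dim=0)  # shape: (num_test_samples, ...)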

Data, demo1

Load data :

Use pandas to load a CSV file:

import pandas as pd

train_data = pd.read_csv('./name.csv').drop(columns=['date']).values
x_train, y_train = train_data[:, :-1], train_data[:, -1]

Dataset

__init__ : Read the data and preprocess it.

__getitem__ : Return one sample at a time. In this case, one sample consists of a 117-dimensional feature vector and a label.

__len__ : Return the size of the dataset. In this case, it is 2699.

class COVID19Dataset(Dataset):
    '''
    x: np.ndarray  feature matrix.
    y: np.ndarray  target labels; if None, this dataset is used for prediction
    '''
    def __init__(self, x, y=None):
        if y is None:
            self.y = y
        else:
            self.y = torch.FloatTensor(y)
        self.x = torch.FloatTensor(x)

    def __getitem__(self, idx):
        if self.y is None:
            return self.x[idx]
        return self.x[idx], self.y[idx]

    def __len__(self):
        return len(self.x)

Dataloader

train_loader = DataLoader(train_dataset, batch_size = 32, shuffle = True, pin_memory = True)
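
pin_memory=True keeps batches in page-locked host memory, which speeds up host-to-GPU transfers; it only pays off when training on a CUDA device.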

Model

class My_Model(nn.Module):
    def __init__(self, input_dim):
        super(My_Model, self).__init__()
        # TODO: modify the model structure; mind the matrix dimensions
        self.layers = nn.Sequential(
            nn.Linear(input_dim, 16),
            nn.ReLU(),
            nn.Linear(16, 8),
            nn.ReLU(),
            nn.Linear(8, 1)
        )

    def forward(self, x):
        x = self.layers(x)
        x = x.squeeze(1) # (B, 1) -> (B)
        return x
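
The final squeeze makes the prediction shape (B,) match the target y; with mismatched shapes such as (B, 1) vs. (B), nn.MSELoss would broadcast (with a warning) and compute an unintended loss.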

Criterion

criterion = torch.nn.MSELoss(reduction = 'mean')

Optimizer

optimizer = torch.optim.SGD(model.parameters(), lr = 1e-5, momentum = 0.9)

Training Loop
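
Putting the demo pieces together, a minimal sketch of the training loop (n_epochs is assumed to be defined; train_loader, model, criterion, and optimizer come from the steps above):

for epoch in range(n_epochs):
    model.train()  # set the model to training mode
    for x, y in train_loader:
        optimizer.zero_grad()  # clear gradients from the previous step
        x, y = x.to(device), y.to(device)  # move the batch to the device
        pred = model(x)  # forward pass
        loss = criterion(pred, y)  # compute the loss
        loss.backward()  # backpropagate
        optimizer.step()  # update the parameters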

Documentation and Common Errors

Read the official PyTorch tutorials.

Colab (highly recommended).


    基本结构一个C#程序主要包括以下部分:命名空间声明(Namespacedeclaration)一个classClass方法Class属性一个Main方法语句(Statements)&表达式(Expressions)注释对比于java语言,c#可以说非常相似java的package相似于c#的命名空间java的类和c#的类一样,并且对于一个c#......