
softmax-regression


import torch
from d2l import torch as d2l


batch_size = 50
train_iter , test_iter = d2l.load_data_fashion_mnist(batch_size )
help(d2l.load_data_fashion_mnist)
Help on function load_data_fashion_mnist in module d2l.torch:

load_data_fashion_mnist(batch_size, resize=None)
    Download the Fashion-MNIST dataset and then load it into memory.
    
    Defined in :numref:`sec_fashion_mnist`

for X , Y in train_iter:
    print(X.shape , Y.shape)
    break
torch.Size([50, 1, 28, 28]) torch.Size([50])
input_dim = 28*28 
output_dim = 10
W = torch.normal( 0,0.1 , (input_dim , output_dim) , requires_grad = True )
b = torch.zeros( (output_dim) , requires_grad = True )
x = torch.randn((1,28*28  ))
(x@W+b).shape
torch.Size([1, 10])
28*28
784
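For reference, these shapes implement the softmax regression model: for a batch X of shape (batch_size, 784),

    y_hat = softmax(X @ W + b)   # W: (784, 10), b: (10,)

so each row of y_hat is a length-10 vector of class probabilities.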
# test torch.sum() usage
x = torch.range(0,11).reshape(2,6)  # note: torch.range includes the endpoint 11
C:\Users\陈昌明\AppData\Local\Temp\ipykernel_4784\3677883941.py:2: UserWarning: torch.range is deprecated and will be removed in a future release because its behavior is inconsistent with Python's range builtin. Instead, use torch.arange, which produces values in [start, end).
  x = torch.range(0,11).reshape(2,6)  # 这里是包含11
x
tensor([[ 0.,  1.,  2.,  3.,  4.,  5.],
        [ 6.,  7.,  8.,  9., 10., 11.]])
torch.sum(x,dim = 0) , torch.sum(x,dim = 1) 
(tensor([ 6.,  8., 10., 12., 14., 16.]), tensor([15., 51.]))
torch.sum(x,dim=0).shape , torch.sum(x,dim=0,keepdim = True).shape
(torch.Size([6]), torch.Size([1, 6]))

dim is the dimension that gets reduced away, counting from 0: shape (A,B,C) with dim = 1 becomes (A,C).
With keepdim=True, (A,B,C) becomes (A,1,C), i.e. the reduced dimension is kept with size 1.

# softmax
def softmax(x):
    x_exp = torch.exp(x)
    
    x_sum = torch.sum(x_exp , dim = 1 , keepdim = True) 
    # keepdim is what matters here; reshaping afterwards also works:
#     x_sum = torch.sum(x_exp , dim = 1 ).reshape((len(x),1)) 
    
    return x_exp/x_sum
    
x = torch.range(0,5).reshape((2,3))
C:\Users\陈昌明\AppData\Local\Temp\ipykernel_4784\4097049283.py:1: UserWarning: torch.range is deprecated and will be removed in a future release because its behavior is inconsistent with Python's range builtin. Instead, use torch.arange, which produces values in [start, end).
  x = torch.range(0,5).reshape((2,3))
x
tensor([[0., 1., 2.],
        [3., 4., 5.]])
softmax(x) ,torch.sum( softmax(x),dim = 1 )
(tensor([[0.0900, 0.2447, 0.6652],
         [0.0900, 0.2447, 0.6652]]),
 tensor([1., 1.]))
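One caveat the scratch version skips: torch.exp overflows for large inputs (around 89 in float32). A numerically stable variant, sketched here as an addition to the notebook, subtracts the row-wise max first; softmax is shift-invariant, so the result is unchanged:

def stable_softmax(x):
    # subtracting the row max leaves softmax unchanged but keeps exp() finite
    x_shift = x - torch.max(x, dim=1, keepdim=True).values
    x_exp = torch.exp(x_shift)
    return x_exp / torch.sum(x_exp, dim=1, keepdim=True)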
def net(X ):
    # flatten each image to a 784-vector, then linear layer plus softmax
    return softmax( torch.matmul(X.reshape(-1,W.shape[0]) , W)+ b )
net( torch.randn( (2,784)  ) ).shape
torch.Size([2, 10])
# loss function

def cross_entropy(y_hat , y):
    # for each row i, pick out the predicted probability of the true class y[i]
    return -torch.log(y_hat[ range( len(y_hat) ) , y ]  )
test_x = torch.randn( (2,784) )
test_y = torch.tensor([1,2])
test_x.shape , test_y.shape
(torch.Size([2, 784]), torch.Size([2]))
cross_entropy( net(test_x) ,test_y )
tensor([8.5022, 5.7716], grad_fn=<NegBackward0>)
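The fancy indexing y_hat[range(len(y_hat)), y] pairs each row index with its label column; a minimal illustration with made-up values:

y_hat = torch.tensor([[0.1, 0.3, 0.6],
                      [0.3, 0.2, 0.5]])
y = torch.tensor([2, 0])
y_hat[range(len(y_hat)), y]   # picks y_hat[0, 2] and y_hat[1, 0]
# tensor([0.6000, 0.3000])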
def accuracy(y_hat,y):
    # returns the number (not the fraction) of correct predictions
    y_hat = y_hat.argmax(axis=1)
    return (y_hat.type(y.dtype)==y).sum()
accuracy( net( torch.randn( (2,784)  ) ), test_y )/2
tensor(0.)
class Accumulator:
    # accumulates running sums over n variables
    def __init__(self , n):
        self.count = [0.0]*n
        
    def add(self , *args):
        self.count = [ a+float(b) for a,b in zip(self.count , args) ]
    
    def reset(self):
        self.count = [0.0]*len(self.count)

    def __getitem__(self , index):
        return self.count[index]
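A quick sanity check on Accumulator (made-up numbers): slot 0 collects correct counts, slot 1 sample counts.

accu = Accumulator(2)
accu.add(3, 10)    # e.g. 3 correct out of a batch of 10
accu.add(2, 10)
accu[0] / accu[1]  # (3+2)/(10+10) = 0.25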
def evaluate_accuracy(net , data_iter):
    accu = Accumulator(2)   # (number correct, number of samples)
    with torch.no_grad():
        for X , y in data_iter:
            y_hat = net(X)
            acc = accuracy(y_hat , y)
            accu.add(acc , len(X)  )
    return accu[0]/accu[1]
evaluate_accuracy(net , test_iter )
0.083
Roughly 1/10, i.e. chance level over 10 classes, as expected for an untrained net.
def train_epoch(net , train_iter , loss , optimizer):
    if isinstance(net , torch.nn.Module):
        net.train()
    accu = Accumulator(3)   # (summed loss, number correct, number of samples)
    for X , y in train_iter:
        y_hat = net(X )
        
        l = loss(y_hat , y)
        if isinstance(optimizer , torch.optim.Optimizer):
            # built-in optimizer: average the per-sample losses, then step
            optimizer.zero_grad()
            l.mean().backward()
            optimizer.step()
        else:
            # custom updater: backward on the sum; d2l.sgd divides by batch_size
            l.sum().backward()
            optimizer(X.shape[0])
        
        acc = accuracy( y_hat , y )
        accu.add(l.sum() , acc , len(y) )
    return accu[0]/accu[2] , accu[1]/accu[2]
lr = 0.1
def updater(batch_size):
    d2l.sgd([W,b] , lr , batch_size)
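For reference, d2l.sgd updates the parameters in place; its body in the book is essentially the following (paraphrased here, check your d2l version):

def sgd(params, lr, batch_size):
    # minibatch SGD; dividing by batch_size is why train_epoch calls
    # l.sum().backward() on this branch
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            param.grad.zero_()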
train_epoch(net , train_iter , cross_entropy , updater)
(0.6210742252667745, 0.7871166666666667)
def train_ch3(net , train_iter , test_iter ,loss , optimizer , num_epoch):
    for i in  range(num_epoch):
        # note: the loss argument is ignored; the scratch cross_entropy is
        # hard-coded here, which matters for the concise version below
        l , acc = train_epoch(net , train_iter , cross_entropy , optimizer )
        test_acc = evaluate_accuracy(net , test_iter)
        print(f" train loss: {l:2f} , train acc: {acc:2f} , test acc: {test_acc:2f}")
train_ch3(net , train_iter , test_iter , cross_entropy, updater , 5)
 train loss: 0.494678 , train acc: 0.829333 , test acc: 0.795800
 train loss: 0.466205 , train acc: 0.839683 , test acc: 0.827200
 train loss: 0.455145 , train acc: 0.843700 , test acc: 0.833100
 train loss: 0.443355 , train acc: 0.847467 , test acc: 0.836400
 train loss: 0.435866 , train acc: 0.848167 , test acc: 0.828300
  • Plotting-related content is omitted here; a dedicated plotting notebook will follow.

Key functions

  • torch.sum: note the usage of dim and keepdim, covered earlier in this post
  • Note these two equivalent forms:
    • x_sum = torch.sum(x_exp , dim = 1 , keepdim = True)
    • x_sum = torch.sum(x_exp , dim = 1 ).reshape((len(x),1))
    • In softmax the dimensions must line up so broadcasting can apply, otherwise it errors; examples below
a1 = torch.randn((2,4)) 
a2 = torch.randn(2,1)
a3 = torch.randn(1,4)
a4 = torch.randn(2)
a5 = torch.randn(4)
(a1/a2).shape , (a1/a3).shape
(torch.Size([2, 4]), torch.Size([2, 4]))
(a1/a4).shape  # error
---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

Cell In[280], line 1
----> 1 (a1/a4).shape  # error


RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 1
(a1/a5).shape 
torch.Size([2, 4])
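A small addition, not in the original: the failing a1/a4 case can be fixed by giving a4 an explicit trailing dimension so its 2 entries broadcast across the rows:

(a1 / a4.unsqueeze(1)).shape   # a4: (2,) -> (2, 1), result: torch.Size([2, 4])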

Concise version

import torch
from d2l import torch as d2l


batch_size = 50
train_iter  , test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)
net = torch.nn.Sequential( torch.nn.Flatten() , torch.nn.Linear(784,10) )


def init_weight(p):
    if type(p) == torch.nn.Linear:
        torch.nn.init.normal_(p.weight ,std= 0.01)
net.apply(init_weight)   # applies init_weight to every submodule
Sequential(
  (0): Flatten(start_dim=1, end_dim=-1)
  (1): Linear(in_features=784, out_features=10, bias=True)
)
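A quick shape check (added as a sketch): Flatten(start_dim=1) turns a batch of images (N, 1, 28, 28) into (N, 784) before the Linear layer, so the net accepts raw Fashion-MNIST batches directly:

net(torch.randn(2, 1, 28, 28)).shape   # torch.Size([2, 10])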
loss = torch.nn.CrossEntropyLoss(reduction="none")

lr = 0.1
trainer = torch.optim.SGD(net.parameters() , lr)
help(d2l.train_ch3)
Help on function train_ch3 in module d2l.torch:

train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
    Train a model (defined in Chapter 3).
    
    Defined in :numref:`sec_softmax_scratch`

num_epoch =5
d2l.train_ch3(net,train_iter , test_iter , loss , num_epoch , trainer)

train_ch3(net,train_iter , test_iter , loss , trainer,num_epoch)
 train loss: nan , train acc: 0.825700 , test acc: 0.798500
 train loss: -4.093576 , train acc: 0.804967 , test acc: 0.783000
 train loss: -4.349966 , train acc: 0.793533 , test acc: 0.773700
 train loss: -4.517625 , train acc: 0.784267 , test acc: 0.766200
 train loss: -4.642702 , train acc: 0.779450 , test acc: 0.760700

Why the nan and negative "losses": the scratch train_ch3 hard-codes the scratch cross_entropy, while the concise net ends at the Linear layer, so cross_entropy takes -log of raw logits. A logit can be negative (log gives nan) or greater than 1 (log is positive, so the loss goes negative). Accuracy is still meaningful since argmax is unaffected. nn.CrossEntropyLoss avoids this by applying log-softmax internally, which is what the d2l.train_ch3 call above relies on.

From: https://www.cnblogs.com/cndccm/p/18263366
