
MLP for MNIST Classification

Posted: 2024-05-06 23:27:04

1. Dataset

The MNIST handwritten digit dataset.
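Each example is a 28×28 grayscale image with an integer label from 0 to 9. The training script below one-hot encodes the labels with tf.keras.utils.to_categorical; for reference, a NumPy-only equivalent is a one-liner (a minimal sketch, no TensorFlow required):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # Row i of the identity matrix is all zeros except a 1 at index i,
    # so indexing np.eye with the labels yields the one-hot matrix.
    return np.eye(num_classes)[labels]

y = np.array([3, 0, 9])
Y = to_one_hot(y, 10)
print(Y.shape)        # (3, 10)
print(int(Y[0].argmax()))  # 3
```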

2. Code

'''
Description: 
Author: zhangyh
Date: 2024-05-04 15:21:49
LastEditTime: 2024-05-04 22:36:26
LastEditors: zhangyh
'''

import numpy as np

class MlpClassifier:    
    def __init__(self, input_size, hidden_size1, hidden_size2, output_size, learning_rate=0.01):
        self.input_size = input_size
        self.hidden_size1 = hidden_size1
        self.hidden_size2 = hidden_size2
        self.output_size = output_size
        self.learning_rate = learning_rate

        self.W1 = np.random.randn(input_size, hidden_size1) * 0.01
        self.b1 = np.zeros((1, hidden_size1))
        self.W2 = np.random.randn(hidden_size1, hidden_size2) * 0.01
        self.b2 = np.zeros((1, hidden_size2))
        self.W3 = np.random.randn(hidden_size2, output_size) * 0.01
        self.b3 = np.zeros((1, output_size))
    
    def softmax(self, x):
        exps = np.exp(x - np.max(x, axis=1, keepdims=True))
        return exps / np.sum(exps, axis=1, keepdims=True)
    
    def relu(self, x):
        return np.maximum(x, 0)
    
    def relu_derivative(self, x):
        return np.where(x > 0, 1, 0)
    
    def cross_entropy_loss(self, y_true, y_pred):
        m = y_true.shape[0]
        return -np.sum(y_true * np.log(y_pred + 1e-8)) / m
    
    def forward(self, X):
        self.Z1 = np.dot(X, self.W1) + self.b1
        self.A1 = self.relu(self.Z1)
        self.Z2 = np.dot(self.A1, self.W2) + self.b2
        self.A2 = self.relu(self.Z2)
        self.Z3 = np.dot(self.A2, self.W3) + self.b3
        self.A3 = self.softmax(self.Z3)
        return self.A3
    
    def backward(self, X, y):
        m = X.shape[0]
        dZ3 = self.A3 - y
        dW3 = np.dot(self.A2.T, dZ3) / m
        db3 = np.sum(dZ3, axis=0, keepdims=True) / m
        dA2 = np.dot(dZ3, self.W3.T)
        dZ2 = dA2 * self.relu_derivative(self.Z2)
        dW2 = np.dot(self.A1.T, dZ2) / m
        db2 = np.sum(dZ2, axis=0, keepdims=True) / m
        dA1 = np.dot(dZ2, self.W2.T)
        dZ1 = dA1 * self.relu_derivative(self.Z1)
        dW1 = np.dot(X.T, dZ1) / m
        db1 = np.sum(dZ1, axis=0, keepdims=True) / m
        
        # Update weights and biases
        self.W3 -= self.learning_rate * dW3
        self.b3 -= self.learning_rate * db3
        self.W2 -= self.learning_rate * dW2
        self.b2 -= self.learning_rate * db2
        self.W1 -= self.learning_rate * dW1
        self.b1 -= self.learning_rate * db1

    # Compute classification accuracy
    def accuracy(self, y_pred, y):
        predictions = np.argmax(y_pred, axis=1)
        correct_predictions = np.sum(predictions == np.argmax(y, axis=1))    
        return correct_predictions / y.shape[0] 
    
    def train(self, X, y, epochs=100, batch_size=64):
        print('Training...')    
        m = X.shape[0]
        for epoch in range(epochs):
            for i in range(0, m, batch_size):
                X_batch = X[i:i+batch_size]
                y_batch = y[i:i+batch_size]
                
                # Forward propagation
                y_pred = self.forward(X_batch)
                
                # Backward propagation
                self.backward(X_batch, y_batch)
            
            if (epoch+1) % 10 == 0:
                y_full_pred = self.forward(X)
                loss = self.cross_entropy_loss(y, y_full_pred)
                # Accuracy over the full training set, not just the last mini-batch
                acc = self.accuracy(y_full_pred, y)
                print(f'Epoch {epoch+1}/{epochs}, Loss: {loss}, Training-Accuracy: {acc}')

    def test(self, X, y):
        print('Testing...') 
        y_pred = self.forward(X)
        acc = self.accuracy(y_pred, y)    
        return acc


if __name__ == '__main__':  

    import tensorflow as tf

    # Load the MNIST dataset
    (X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()

    # Flatten images to vectors and scale pixel values to [0, 1]
    X_train = X_train.reshape(X_train.shape[0], -1) / 255.0
    X_test = X_test.reshape(X_test.shape[0], -1) / 255.0
    # One-hot encode the labels
    num_classes = 10
    y_train = tf.keras.utils.to_categorical(y_train, num_classes)
    y_test = tf.keras.utils.to_categorical(y_test, num_classes)

    # Resulting shapes:
    #   training set: (60000, 784) (60000, 10)
    #   test set:     (10000, 784) (10000, 10)
    model = MlpClassifier(784, 128, 128, 10)

    model.train(X_train, y_train)   

    test_acc = model.test(X_test, y_test)  
    print(f'Test-Accuracy: {test_acc}') 
  

  
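The backward pass above starts from dZ3 = A3 - y, the well-known identity for the gradient of cross-entropy applied to a softmax output with respect to the logits. A quick finite-difference check of that identity (a standalone sketch, independent of the class above):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(z, y):
    # Mean cross-entropy over the batch, matching cross_entropy_loss above
    p = softmax(z)
    return -np.sum(y * np.log(p + 1e-8)) / y.shape[0]

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 10))                      # random logits
y = np.eye(10)[rng.integers(0, 10, size=4)]       # random one-hot targets

# Analytic gradient: (softmax(z) - y) / batch_size
analytic = (softmax(z) - y) / z.shape[0]

# Numerical gradient via central differences
numeric = np.zeros_like(z)
eps = 1e-6
for i in range(z.shape[0]):
    for j in range(z.shape[1]):
        zp, zm = z.copy(), z.copy()
        zp[i, j] += eps
        zm[i, j] -= eps
        numeric[i, j] = (loss(zp, y) - loss(zm, y)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be tiny
```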

3. Results

Training...
Epoch 10/100, Loss: 0.3617846299623725, Training-Accuracy: 0.9375
Epoch 20/100, Loss: 0.1946690996652946, Training-Accuracy: 1.0
Epoch 30/100, Loss: 0.13053815227522408, Training-Accuracy: 1.0
Epoch 40/100, Loss: 0.09467908427578901, Training-Accuracy: 1.0
Epoch 50/100, Loss: 0.07120217251250453, Training-Accuracy: 1.0
Epoch 60/100, Loss: 0.055233734086591456, Training-Accuracy: 1.0
Epoch 70/100, Loss: 0.04369171830999816, Training-Accuracy: 1.0
Epoch 80/100, Loss: 0.03469674775956587, Training-Accuracy: 1.0
Epoch 90/100, Loss: 0.027861857647949812, Training-Accuracy: 1.0
Epoch 100/100, Loss: 0.0225212692988995, Training-Accuracy: 1.0
Testing...
Test-Accuracy: 0.9775
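One refinement worth noting: the training loop above visits mini-batches in a fixed order every epoch, while shuffling the samples each epoch usually helps SGD converge. A minimal sketch of an epoch-shuffled batch iterator (hypothetical helper, not part of the code above):

```python
import numpy as np

def batches(X, y, batch_size, rng):
    # Shuffle sample order once per epoch, then slice mini-batches.
    idx = rng.permutation(X.shape[0])
    for i in range(0, X.shape[0], batch_size):
        sel = idx[i:i + batch_size]
        yield X[sel], y[sel]

rng = np.random.default_rng(0)
X = np.arange(10, dtype=float).reshape(10, 1)
y = np.eye(10)
sizes = [xb.shape[0] for xb, _ in batches(X, y, 4, rng)]
print(sizes)  # [4, 4, 2]
```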

  

From: https://www.cnblogs.com/zhangyh-blog/p/18176187

    图像分类数据集MNIST数据集(LeCunetal.,1998)是图像分类中广泛使用的数据集之一,但作为基准数据集过于简单。我们将使用类似但更复杂的Fashion‐MNIST数据集(Xiaoetal.,2017)。%matplotlibinlineimporttorchimporttorchvisionfromtorch.utilsimportdatafromt......