
CNN Practice Exercises


1. Handwritten Digit Recognition


Load the data:

import tensorflow as tf
import pandas as pd
from tensorflow.keras import layers, optimizers, datasets, Sequential
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt

train = pd.read_csv("./dataset/train.csv")
test = pd.read_csv("./dataset/test.csv")
train.head()
train.shape,test.shape
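For the Kaggle Digit Recognizer data these typically print (42000, 785) for train and (28000, 784) for test: one row per image, 784 pixel columns, plus the extra label column in train.csv.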

Data processing

y = train['label']
x = train.drop(columns=['label'])
y.shape
x.shape

Data normalization

tf.reduce_max(x),tf.reduce_min(x)
# Normalize the pixel values to [0, 1] (rescale to a common, dimensionless range)
x = x / 255.0
test = test / 255.0
tf.reduce_max(x),tf.reduce_min(x)


Split the dataset

x = x.values.reshape(-1,28,28,1)
test = test.values.reshape(-1,28,28,1)

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test=train_test_split(x, y, test_size=0.1, random_state=10)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
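If the class balance of the split matters, train_test_split also accepts a stratify argument; a minimal variant of the call above that keeps each digit's proportion equal in both splits:

# Stratified split: each of the 10 digit classes keeps the same
# proportion in the training and validation sets.
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.1, random_state=10, stratify=y)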


Convert the labels to one-hot encoding; either of the two methods below works:

#y_train = to_categorical(y_train, num_classes = 10)
#y_test = to_categorical(y_test, num_classes = 10)
y_train=tf.one_hot(y_train, depth=10)
y_test=tf.one_hot(y_test, depth=10)
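A quick illustration of what the one-hot conversion produces (label 3 becomes a length-10 vector with a 1 in position 3):

# Example: one-hot encode the labels 3 and 0 with depth 10.
print(tf.one_hot([3, 0], depth=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]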

Model definition

model = Sequential([ # 2 units of conv + max pooling
    # unit 1:
    layers.Conv2D(6, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.Conv2D(6, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 2
    layers.Conv2D(16, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.Conv2D(16, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    layers.Flatten(),
    layers.Dense(120, activation=tf.nn.relu),
    layers.Dropout(0.25),
    layers.Dense(84, activation=tf.nn.relu),
    layers.Dropout(0.25),
    layers.Dense(10, activation="softmax"),
    # Note: the final layer uses "softmax" to output class probabilities
])
model.build(input_shape=[None, 28, 28, 1])
model.summary()
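Because every conv layer uses padding="same" with stride 1, only the two pooling layers shrink the feature maps: 28 → 14 → 7, so Flatten sees 7 × 7 × 16 = 784 features. A quick sanity check on a dummy batch:

# Forward a dummy batch to confirm the output shape is (batch, 10).
dummy = tf.zeros([1, 28, 28, 1])
print(model(dummy).shape)  # (1, 10)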


Compile and fit

model.compile(optimizer="Adamax",
                  loss="categorical_crossentropy", metrics=["accuracy"])

The optimizer can be chosen from:
SGD
RMSprop
Adam
Adadelta
Adagrad
Adamax
Nadam
Ftrl
See the official Chinese-language Keras documentation for the usage details of each.
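Passing the string "Adamax" uses the optimizer's default hyperparameters. To tune, for example, the learning rate, pass an optimizer instance instead (a minimal sketch; 0.001 is Adamax's default learning rate):

# Equivalent compile call with an explicit optimizer object.
model.compile(optimizer=optimizers.Adamax(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])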

Here loss="categorical_crossentropy" is the categorical cross-entropy. The common loss functions are:
mean_squared_error: mean squared error
categorical_crossentropy: categorical cross-entropy
binary_crossentropy: binary cross-entropy
sparse_categorical_crossentropy: sparse categorical cross-entropy
mean_absolute_error: mean absolute error
hinge: hinge loss
squared_hinge: squared hinge loss
cosine_proximity: cosine similarity loss
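Note that sparse_categorical_crossentropy takes the integer labels 0-9 directly, so with it the one-hot conversion above could be skipped entirely:

# Alternative: keep y_train / y_test as integers and use the sparse loss.
model.compile(optimizer="Adamax",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])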

epochs = 10
batch_size = 64
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
                        validation_data=(x_test, y_test))

Plot the training curves

xx = range(1, len(history.history['accuracy']) + 1)
plt.plot(xx, history.history['accuracy'])
plt.plot(xx, history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.xticks(xx)
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
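The history object also records the losses; the corresponding loss plot follows the same pattern:

plt.plot(xx, history.history['loss'])
plt.plot(xx, history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.xticks(xx)
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()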


Save the predictions

import numpy as np
# Predict on the test set and save the results to predict_result.csv
results = model.predict(test)
results = np.argmax(results,axis = 1)

results = pd.Series(results,name="Label")
submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)

submission.to_csv("predict_result.csv",index=False)
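A quick sanity check before submitting, confirming the expected 28000 rows and two columns:

print(submission.shape)  # expected: (28000, 2)
print(submission.head())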

2. CIFAR-100

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import layers, optimizers, datasets, Sequential
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
tf.random.set_seed(2345)


# Use a GPU for training if one is available
gpus = tf.config.list_physical_devices("GPU")  # list the machine's GPUs
if gpus:  # at least one GPU found
    gpu0 = gpus[0]  # take the first GPU in the list
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")  # make only gpu0 visible (all GPUs are visible by default)


model = Sequential([ # 5 units of conv + max pooling
    # unit 1: 32x32 feature maps
    layers.Conv2D(64, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.Conv2D(64, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.BatchNormalization(),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 2: 16x16 feature maps
    layers.Conv2D(128, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.Conv2D(128, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.BatchNormalization(),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 3: 8x8 feature maps
    layers.Conv2D(256, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.Conv2D(256, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.BatchNormalization(),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 4: 4x4 feature maps
    layers.Conv2D(512, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.Conv2D(512, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.BatchNormalization(),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 5: 2x2 feature maps
    layers.Conv2D(512, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.Conv2D(512, kernel_size=[3, 3], padding="same", activation=tf.nn.relu),
    layers.BatchNormalization(),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    layers.Flatten(),
    layers.Dense(256, activation=tf.nn.relu),
    layers.Dropout(0.5),
    layers.Dense(128, activation=tf.nn.relu),
    layers.Dropout(0.5),
    layers.Dense(100, activation='softmax'),
])



def preprocess(x, y):
    # scale the pixel values to [0, 1]
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)
    return x,y


(x, y), (x_test, y_test) = datasets.cifar100.load_data()

# Apply the normalization; note that preprocess must actually be called,
# otherwise the pixel values stay in [0, 255].
x, y = preprocess(x, y)
x_test, y_test = preprocess(x_test, y_test)

y = tf.squeeze(y, axis=1)
y_test = tf.squeeze(y_test, axis=1)
print(x.shape, y.shape, x_test.shape, y_test.shape)

y = to_categorical(y, num_classes=100)
y_test = to_categorical(y_test, num_classes=100)

print(x.shape, y.shape, x_test.shape, y_test.shape)
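For larger datasets the same preprocessing can be applied lazily through a tf.data pipeline instead of eagerly on the full arrays; a minimal sketch (here the labels stay integers, so the model would be compiled with loss="sparse_categorical_crossentropy" instead):

# Sketch: stream, normalize and batch the training data lazily.
(x_raw, y_raw), _ = datasets.cifar100.load_data()
train_db = (tf.data.Dataset.from_tensor_slices((x_raw, tf.squeeze(y_raw, axis=1)))
            .map(preprocess)
            .shuffle(10000)
            .batch(64))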


def main():
    # Build the model first, otherwise summary() has no shapes to report
    model.build(input_shape=[None, 32, 32, 3])
    model.summary()

    model.compile(optimizer="Adamax",
                  loss="categorical_crossentropy", metrics=["accuracy"])

    epochs = 10
    batch_size = 64
    history = model.fit(x, y, epochs=epochs, batch_size=batch_size,
                        validation_data=(x_test, y_test))

    xx = range(1, len(history.history['accuracy']) + 1)
    plt.plot(xx, history.history['accuracy'])
    plt.plot(xx, history.history['val_accuracy'])
    plt.title('Model accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.xticks(xx)
    plt.legend(['Train', 'Val'], loc='upper left')
    plt.show()



if __name__ == '__main__':
    main()
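With only 10 epochs this VGG-style network may still be improving when training stops; a common addition is an EarlyStopping callback (the patience value below is an illustrative choice, not part of the original code):

# Optional: stop when validation accuracy plateaus and keep the best weights;
# pass callbacks=[early_stop] to model.fit(...) to enable it.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy', patience=3, restore_best_weights=True)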

