
Machine Learning: Recognizing Whether a Person Is Wearing a Mask



(1) Background: Since the emergence of the novel coronavirus in 2019, the pandemic has caused heavy losses of life and property. During the outbreak China adopted strict measures, requiring people to wear masks at all times in public places. Masks help keep an outbreak from spreading: they block bacteria and viruses and limit the spread of droplets, and since respiratory droplets are the main transmission route of the virus, wearing a mask effectively reduces transmission. This project therefore uses machine learning to recognize whether the person in an image is wearing a mask.

(2) Design plan: Download a dataset from the internet and split it into groups to form a new dataset. In a Python environment, check the number of images in each group, preprocess the data, build a network with Keras, train the model, watch how accuracy changes over the training epochs, and finally load images to test the model.

Dataset source: CSDN (dataset download link).

(3) Implementation steps:

1. Download the dataset:
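The rest of the code assumes the downloaded images have already been sorted into train/validation folders with mask and nomask subdirectories, plus a flat test folder. A minimal sketch of such a split is shown below; the raw_mask/raw_nomask source folders and the 70/20/10 ratios are assumptions for illustration, not part of the original post.

import os, random, shutil

base = "E:/Jupyter_files/mask_and_nomask_small"   # target root used by the rest of the post
raw_dirs = {"mask": "E:/downloads/raw_mask",      # assumed locations of the raw downloaded images
            "nomask": "E:/downloads/raw_nomask"}

random.seed(0)
for label, src in raw_dirs.items():
    files = sorted(os.listdir(src))
    random.shuffle(files)
    n_train = int(0.7 * len(files))               # assumed 70/20/10 split
    n_valid = int(0.2 * len(files))
    splits = {"train": files[:n_train],
              "validation": files[n_train:n_train + n_valid],
              "test": files[n_train + n_valid:]}
    for split, names in splits.items():
        # The post keeps test images in one flat folder, the others in per-class subfolders
        out_dir = os.path.join(base, "test") if split == "test" else os.path.join(base, split, label)
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src, name), os.path.join(out_dir, name))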

2. Import the required libraries:

import os
import numpy as np
import os.path
import matplotlib.pyplot as plt
from PIL import Image
from keras import layers
from keras import models
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.models import load_model
from keras.utils import image_utils

3. Check how many images each group contains:

train_path = "E:/Jupyter_files/mask_and_nomask_small/train/"
print('total training mask images:', len(os.listdir(train_path + "mask")))
print('total training nomask images:', len(os.listdir(train_path + "nomask")))
valid_path = "E:/Jupyter_files/mask_and_nomask_small/validation/"
print('total validation mask images:', len(os.listdir(valid_path + "mask")))
print('total validation nomask images:', len(os.listdir(valid_path + "nomask")))
test_path = "E:/Jupyter_files/mask_and_nomask_small/test/"
print('total test images:', len(os.listdir(test_path)))

 

4. Build the network and see how the feature-map dimensions change from layer to layer

model = models.Sequential()
# First convolutional layer (input layer): 32 filters of size 3x3, input_shape = (150, 150, 3)
# Output feature map: 150-3+1 = 148x148, parameters: 32*3*3*3+32 = 896
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))  # output: 148/2 = 74x74
# Output feature map: 74-3+1 = 72x72, parameters: 64*3*3*32+64 = 18496
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))  # output: 72/2 = 36x36
# Output feature map: 36-3+1 = 34x34, parameters: 128*3*3*64+128 = 73856
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))  # output: 34/2 = 17x17
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))  # single sigmoid output for the two classes
# See how the feature-map dimensions change from layer to layer
model.summary()
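For reference, model.summary() should report the following shape progression and parameter counts; these are computed directly from the layer definitions above, not copied from an actual run:

Layer                Output shape            Params
Conv2D (3x3, 32)     (None, 148, 148, 32)    896
MaxPooling2D         (None, 74, 74, 32)      0
Conv2D (3x3, 64)     (None, 72, 72, 64)      18,496
MaxPooling2D         (None, 36, 36, 64)      0
Conv2D (3x3, 128)    (None, 34, 34, 128)     73,856
MaxPooling2D         (None, 17, 17, 128)     0
Conv2D (3x3, 128)    (None, 15, 15, 128)     147,584
MaxPooling2D         (None, 7, 7, 128)       0
Flatten              (None, 6272)            0
Dense (512)          (None, 512)             3,211,776
Dense (1)            (None, 1)               513

Total parameters: 3,453,121.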

5. Compile the model and build the training and validation data

from keras import optimizers
# Preprocess the images before feeding them to the network; build the training and validation data
from keras.preprocessing.image import ImageDataGenerator
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
# Rescale pixel values to [0, 1]
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_dir = 'E:/Jupyter_files/mask_and_nomask_small/train'  # training image directory
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),  # input image size
    batch_size=20,
    class_mode='binary')
validation_dir = 'E:/Jupyter_files/mask_and_nomask_small/validation'  # validation image directory
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break  # the generator yields batches forever, so take just one batch to inspect
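Because flow_from_directory assigns class indices from the subdirectory names in alphabetical order, it is worth printing the mapping once so the sigmoid output can be interpreted correctly in the prediction step. This small check reuses the generator defined above and is not in the original post:

# Expected mapping: {'mask': 0, 'nomask': 1}, so the sigmoid output is the probability of 'nomask'
print(train_generator.class_indices)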

6. Train the model and save it

history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=50)
# Save the trained model as an .h5 file
model.save('E:/Jupyter_files/new_mask_and_nomask_10epoch1.h5')
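Since the curves in the next step are drawn from history.history, it can also be convenient to dump that dictionary to disk so the plots can be reproduced without retraining. A minimal sketch, reusing the history object above; the JSON path is an assumption:

import json

# history.history is a plain dict of lists (acc, val_acc, loss, val_loss)
with open('E:/Jupyter_files/new_mask_and_nomask_history.json', 'w') as f:
    json.dump(history.history, f)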

7. Plot accuracy against training epochs

%matplotlib inline
train_acc = history.history['acc']
valid_acc = history.history['val_acc']
epochs = range(1, len(train_acc) + 1)  # one point per training epoch (50 here)
plt.plot(epochs, train_acc, 'bo', label='Training Accuracy')
plt.plot(epochs, valid_acc, 'r', label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
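The same history object also records the training and validation loss, so the loss curves can be drawn in exactly the same way (a small addition, not shown in the original post, reusing the epochs range defined above):

train_loss = history.history['loss']
valid_loss = history.history['val_loss']
plt.plot(epochs, train_loss, 'bo', label='Training Loss')
plt.plot(epochs, valid_loss, 'r', label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()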

8. Read one sample and inspect its feature maps

8.1 Display the sample

img_path = "E:/Jupyter_files/mask_and_nomask_small/test/1220.png"
img = image_utils.load_img(img_path, target_size=(150, 150))
img_tensor = image_utils.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)
img_tensor /= 255.
print(img_tensor.shape)
# Display the sample
plt.imshow(img_tensor[0])
plt.show()

8.2 Get the feature maps for this sample and display one channel of the first layer

layer_outputs = [layer.output for layer in model.layers[:8]]
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
# Get the feature maps (activations) for this sample
activations = activation_model.predict(img_tensor)
first_layer_activation = activations[0]
plt.matshow(first_layer_activation[0, :, :, 5], cmap="viridis")  # channel 5 of the first layer

 

8.3 Display all feature maps of the first two layers

layer_names = []
for layer in model.layers[:2]:
    layer_names.append(layer.name)
images_per_row = 16  # number of feature maps shown per row
# Loop over the selected layers and display all of their feature maps
for layer_name, layer_activation in zip(layer_names, activations):
    n_features = layer_activation.shape[-1]  # number of feature maps in this layer
    size = layer_activation.shape[1]         # width/height of each feature map
    n_col = n_features // images_per_row     # number of rows needed for this layer
    # Matrix that will hold the tiled image
    display_grid = np.zeros((size * n_col, images_per_row * size))
    # Copy each feature map into the display matrix
    for col in range(n_col):
        for row in range(images_per_row):
            # The (size, size) matrix of one feature map
            channel_image = layer_activation[0, :, :, col * images_per_row + row]
            # Post-process the features to make them visually clearer
            channel_image -= channel_image.mean()
            channel_image /= channel_image.std()
            channel_image *= 64
            channel_image += 128
            # Clip values outside 0-255 back into range
            channel_image = np.clip(channel_image, 0, 255).astype("uint8")
            # Fill this feature map into the display matrix
            display_grid[col * size:(col + 1) * size, row * size:(row + 1) * size] = channel_image
    scale = 1. / size
    # Set the width and height of the figure for this layer
    plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0]))
    plt.title(layer_name)
    plt.grid(False)
    # Show the tiled feature maps
    plt.imshow(display_grid, aspect="auto", cmap="viridis")

9. Read a test sample and resize it

def convertjpg(jpgfile, outdir, width=150, height=150):  # shrink an image to (150, 150)
    img = Image.open(jpgfile)
    try:
        new_img = img.resize((width, height), Image.BILINEAR)
        # The output filename is taken from the global new_file defined below
        new_img.save(os.path.join(outdir, os.path.basename(new_file)))
    except Exception as e:
        print(e)

jpgfile = "E:/Jupyter_files/mask_and_nomask_small/test/0023.jpg"
new_file = "E:/Jupyter_files/mask_and_nomask_small/19.jpg"
# Resize the image to (150, 150) and save it under the new filename
convertjpg(jpgfile, r"E:/Jupyter_files/mask_and_nomask_small")
img_scale = plt.imread('E:/Jupyter_files/mask_and_nomask_small/19.jpg')
plt.imshow(img_scale)
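Since image_utils was already used in step 8.1, the resize-save-reload round trip above can also be done in memory. A minimal alternative sketch, reusing the jpgfile path defined above:

# Load and resize in one step; img_to_array returns a float32 array of shape (150, 150, 3)
img = image_utils.load_img(jpgfile, target_size=(150, 150))
img_scale = image_utils.img_to_array(img) / 255.
plt.imshow(img_scale)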

10. Load the test image and make a prediction

model = load_model('E:/Jupyter_files/new_mask_and_nomask_10epoch1.h5')
img_scale = plt.imread('E:/Jupyter_files/mask_and_nomask_small/19.jpg')
img_scale = img_scale.reshape(1, 150, 150, 3).astype('float32')
img_scale = img_scale / 255  # normalize to 0-1
out = model.predict(img_scale)  # predicted probability
img_scale = plt.imread('E:/Jupyter_files/mask_and_nomask_small/19.jpg')
plt.imshow(img_scale)  # show the image
# flow_from_directory assigns labels alphabetically (mask -> 0, nomask -> 1),
# so the sigmoid output is the probability that the image shows no mask
if out > 0.5:
    print('Probability that this image shows no mask:', out)
else:
    print('Probability that this image shows a mask:', 1 - out)
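When testing several images it may help to wrap the preprocessing and prediction into one function. The sketch below reuses image_utils and the loaded model from above; the function name and threshold handling are my own, and the mask/nomask interpretation assumes the alphabetical class indices printed in step 5:

def predict_mask(img_path, model, size=(150, 150)):
    """Return the model's probability that the image shows a person without a mask."""
    img = image_utils.load_img(img_path, target_size=size)
    x = image_utils.img_to_array(img) / 255.
    x = np.expand_dims(x, axis=0)           # shape (1, 150, 150, 3)
    return float(model.predict(x)[0][0])    # sigmoid output = P(nomask)

p_nomask = predict_mask('E:/Jupyter_files/mask_and_nomask_small/19.jpg', model)
if p_nomask > 0.5:
    print('Predicted: no mask, probability', p_nomask)
else:
    print('Predicted: mask, probability', 1 - p_nomask)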

 

 

The full code:

# Imports
import os
import numpy as np
import os.path
import matplotlib.pyplot as plt
from PIL import Image
from keras import layers
from keras import models
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.models import load_model
from keras.utils import image_utils

# Check how many images each group contains
train_path = "E:/Jupyter_files/mask_and_nomask_small/train/"
print('total training mask images:', len(os.listdir(train_path + "mask")))
print('total training nomask images:', len(os.listdir(train_path + "nomask")))
valid_path = "E:/Jupyter_files/mask_and_nomask_small/validation/"
print('total validation mask images:', len(os.listdir(valid_path + "mask")))
print('total validation nomask images:', len(os.listdir(valid_path + "nomask")))
test_path = "E:/Jupyter_files/mask_and_nomask_small/test/"
print('total test images:', len(os.listdir(test_path)))

# Build the network
model = models.Sequential()
# First convolutional layer (input layer): 32 filters of size 3x3, input_shape = (150, 150, 3)
# Output feature map: 150-3+1 = 148x148, parameters: 32*3*3*3+32 = 896
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))  # output: 148/2 = 74x74
# Output feature map: 74-3+1 = 72x72, parameters: 64*3*3*32+64 = 18496
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))  # output: 72/2 = 36x36
# Output feature map: 36-3+1 = 34x34, parameters: 128*3*3*64+128 = 73856
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))  # output: 34/2 = 17x17
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))  # single sigmoid output for the two classes
# See how the feature-map dimensions change from layer to layer
model.summary()

# Preprocess the images before feeding them to the network; build the training and validation data
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
# Rescale pixel values to [0, 1]
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_dir = 'E:/Jupyter_files/mask_and_nomask_small/train'  # training image directory
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),  # input image size
    batch_size=20,
    class_mode='binary')
validation_dir = 'E:/Jupyter_files/mask_and_nomask_small/validation'  # validation image directory
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break  # the generator yields batches forever, so take just one batch to inspect

# Train the model for 50 epochs
history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=50)
# Save the trained model as an .h5 file
model.save('E:/Jupyter_files/new_mask_and_nomask_10epoch1.h5')

# Plot accuracy for each epoch
%matplotlib inline
train_acc = history.history['acc']
valid_acc = history.history['val_acc']
epochs = range(1, len(train_acc) + 1)
plt.plot(epochs, train_acc, 'bo', label='Training Accuracy')
plt.plot(epochs, valid_acc, 'r', label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

# Read one sample from the test set
img_path = "E:/Jupyter_files/mask_and_nomask_small/test/1220.png"
img = image_utils.load_img(img_path, target_size=(150, 150))
img_tensor = image_utils.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)
img_tensor /= 255.
print(img_tensor.shape)
# Display the sample
plt.imshow(img_tensor[0])
plt.show()

layer_outputs = [layer.output for layer in model.layers[:8]]
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
# Get the feature maps (activations) for this sample
activations = activation_model.predict(img_tensor)
first_layer_activation = activations[0]
plt.matshow(first_layer_activation[0, :, :, 5], cmap="viridis")

# Display all feature maps of the first two layers
layer_names = []
for layer in model.layers[:2]:
    layer_names.append(layer.name)
images_per_row = 16  # number of feature maps shown per row
for layer_name, layer_activation in zip(layer_names, activations):
    n_features = layer_activation.shape[-1]  # number of feature maps in this layer
    size = layer_activation.shape[1]         # width/height of each feature map
    n_col = n_features // images_per_row     # number of rows needed for this layer
    # Matrix that will hold the tiled image
    display_grid = np.zeros((size * n_col, images_per_row * size))
    # Copy each feature map into the display matrix
    for col in range(n_col):
        for row in range(images_per_row):
            # The (size, size) matrix of one feature map
            channel_image = layer_activation[0, :, :, col * images_per_row + row]
            # Post-process the features to make them visually clearer
            channel_image -= channel_image.mean()
            channel_image /= channel_image.std()
            channel_image *= 64
            channel_image += 128
            # Clip values outside 0-255 back into range
            channel_image = np.clip(channel_image, 0, 255).astype("uint8")
            # Fill this feature map into the display matrix
            display_grid[col * size:(col + 1) * size, row * size:(row + 1) * size] = channel_image
    scale = 1. / size
    # Set the width and height of the figure for this layer
    plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0]))
    plt.title(layer_name)
    plt.grid(False)
    plt.imshow(display_grid, aspect="auto", cmap="viridis")

# Read a test sample and resize it
def convertjpg(jpgfile, outdir, width=150, height=150):  # shrink an image to (150, 150)
    img = Image.open(jpgfile)
    try:
        new_img = img.resize((width, height), Image.BILINEAR)
        # The output filename is taken from the global new_file defined below
        new_img.save(os.path.join(outdir, os.path.basename(new_file)))
    except Exception as e:
        print(e)

jpgfile = "E:/Jupyter_files/mask_and_nomask_small/image_mask/0023.jpg"
new_file = "E:/Jupyter_files/mask_and_nomask_small/19.jpg"
# Resize the image to (150, 150) and save it under the new filename
convertjpg(jpgfile, r"E:/Jupyter_files/mask_and_nomask_small")
img_scale = plt.imread('E:/Jupyter_files/mask_and_nomask_small/19.jpg')
plt.imshow(img_scale)

# Load the test image and make a prediction
model = load_model('E:/Jupyter_files/new_mask_and_nomask_10epoch1.h5')
img_scale = plt.imread('E:/Jupyter_files/mask_and_nomask_small/19.jpg')
img_scale = img_scale.reshape(1, 150, 150, 3).astype('float32')
img_scale = img_scale / 255  # normalize to 0-1
out = model.predict(img_scale)  # predicted probability
img_scale = plt.imread('E:/Jupyter_files/mask_and_nomask_small/19.jpg')
plt.imshow(img_scale)  # show the image
# flow_from_directory assigns labels alphabetically (mask -> 0, nomask -> 1),
# so the sigmoid output is the probability that the image shows no mask
if out > 0.5:
    print('Probability that this image shows no mask:', out)
else:
    print('Probability that this image shows a mask:', 1 - out)

From: https://www.cnblogs.com/wszw/p/16989435.html
