Experiment 5: Handwritten Digit Recognition with a Fully Connected Neural Network
【Objectives】
- Understand the principles of neural networks; master forward inference and back-propagation;
- Master training and inference of a fully connected neural network model using the PyTorch framework.
【Experiment Content】
Using the PyTorch framework, design a fully connected neural network and train it to recognize the MNIST handwritten digit dataset.
【Report Requirements】
- Modify the network structure: change the number of layers and observe the effect on training time, detection time, accuracy, and related metrics;
- Modify the learning rate and observe the effect on training and detection performance;
- Modify the network structure: increase or decrease the number of neurons and observe the effect on training and detection performance (a configurable starting point is sketched right after this list).
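All three requirements amount to re-running the experiment with a different network or optimizer configuration. Below is a minimal sketch of a purely fully connected network whose depth, width, and learning rate are easy to vary; the class name ConfigurableMLP and the hidden_sizes argument are illustrative, not part of the lab's reference code. It returns log-probabilities so it stays compatible with the F.nll_loss used in the training code further down.
import torch
import torch.nn as nn
import torch.nn.functional as F
class ConfigurableMLP(nn.Module):
    def __init__(self, hidden_sizes=(128, 64)):
        super().__init__()
        layers = []
        in_features = 28 * 28                      # MNIST images flattened to 784 inputs
        for h in hidden_sizes:                     # one Linear+ReLU pair per hidden layer
            layers += [nn.Linear(in_features, h), nn.ReLU()]
            in_features = h
        layers.append(nn.Linear(in_features, 10))  # 10 digit classes
        self.net = nn.Sequential(*layers)
    def forward(self, x):
        x = x.view(x.size(0), -1)                  # flatten each image to a 784-vector
        return F.log_softmax(self.net(x), dim=1)   # log-probabilities for F.nll_loss
For the report, changing hidden_sizes (e.g. (256,) vs. (128, 64, 32)) varies the layer count and the number of neurons, while the learning rate is varied through the optimizer's lr argument.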
【Procedure】
- Import the required libraries:
# Import the required libraries
import torch
import torchvision
from torch.utils.data import DataLoader
- Prepare the dataset:
# Hyperparameters
n_epochs = 3
batch_size_train = 64
batch_size_test = 1000
learning_rate = 0.01
momentum = 0.5
log_interval = 10
# Fix the random seed for reproducible results
random_seed = 1
torch.manual_seed(random_seed)
# Download the datasets into the ./data folder
train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data/', train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data/', train=False, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=batch_size_test, shuffle=True)
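The normalization constants 0.1307 and 0.3081 are the commonly quoted mean and standard deviation of the MNIST training images. If in doubt, they can be recomputed from the raw tensors; a quick check, assuming the full training set fits in memory:
# Recompute the MNIST training-set mean/std used by Normalize
raw = torchvision.datasets.MNIST('./data/', train=True, download=True,
                                 transform=torchvision.transforms.ToTensor())
data = torch.stack([img for img, _ in raw])   # shape: (60000, 1, 28, 28)
print(data.mean().item(), data.std().item())  # ≈ 0.1307, ≈ 0.3081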
- View a few of the downloaded images (note that example_data and example_targets must be pulled from the test loader first):
import matplotlib.pyplot as plt
# Grab one batch from the test loader so example_data/example_targets are defined
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
fig = plt.figure()
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.tight_layout()
    plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
    plt.title("Ground Truth: {}".format(example_targets[i]))
    plt.xticks([])
    plt.yticks([])
plt.show()
- Build the network. Note that this reference implementation pairs two convolutional layers with two fully connected layers; a purely fully connected variant is sketched under the report requirements above.
# Build the network
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)  # explicit dim avoids the implicit-dimension deprecation warning
# Initialize the network and the optimizer
network = Net()
optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum)
# Bookkeeping for the training and test loss curves
train_losses = []
train_counter = []
test_losses = []
test_counter = [i*len(train_loader.dataset) for i in range(n_epochs + 1)]
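The in_features=320 of fc1 follows from the shapes after the two convolution/pooling stages: conv1 (5×5) maps 28×28 to 24×24 and max-pooling halves it to 12×12; conv2 maps that to 8×8 and pooling gives 4×4; with 20 channels this flattens to 20·4·4 = 320. A quick sanity check with a dummy batch (not part of the lab code):
# Trace the shapes through the convolutional stages
x = torch.zeros(1, 1, 28, 28)          # one dummy MNIST-sized image
x = F.max_pool2d(network.conv1(x), 2)
print(x.shape)                         # torch.Size([1, 10, 12, 12])
x = F.max_pool2d(network.conv2(x), 2)
print(x.shape)                         # torch.Size([1, 20, 4, 4]) -> flattens to 320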
- Training function:
# One pass over the training set; logs the loss and checkpoints the model every log_interval batches
def train(epoch):
    network.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = network(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            train_losses.append(loss.item())
            train_counter.append(
                (batch_idx * batch_size_train) + ((epoch - 1) * len(train_loader.dataset)))
            # Save checkpoints so training can be resumed later
            torch.save(network.state_dict(), './model.pth')
            torch.save(optimizer.state_dict(), './optimizer.pth')
train(1)
Train Epoch: 1 [1920/60000 (3%)] Loss: 2.260613
Train Epoch: 1 [2560/60000 (4%)] Loss: 2.220656
Train Epoch: 1 [3200/60000 (5%)] Loss: 2.184241
Train Epoch: 1 [3840/60000 (6%)] Loss: 2.265190
Train Epoch: 1 [4480/60000 (7%)] Loss: 2.108070
Train Epoch: 1 [5120/60000 (9%)] Loss: 2.060574
Train Epoch: 1 [5760/60000 (10%)] Loss: 1.918511
Train Epoch: 1 [6400/60000 (11%)] Loss: 1.947299
...
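Because train() checkpoints both the model and the optimizer state, a later session can pick up where training left off. A minimal sketch; the file names follow the train() code above:
# Rebuild the network/optimizer and load the saved state
continued_network = Net()
continued_optimizer = optim.SGD(continued_network.parameters(),
                                lr=learning_rate, momentum=momentum)
continued_network.load_state_dict(torch.load('./model.pth'))
continued_optimizer.load_state_dict(torch.load('./optimizer.pth'))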
# Evaluate on the test set: prints the average loss and the accuracy
def test():
    network.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = network(data)
            # reduction='sum' replaces the deprecated size_average=False
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            pred = output.data.max(1, keepdim=True)[1]
            correct += pred.eq(target.data.view_as(pred)).sum()
    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
test()
Test set: Avg. loss: 0.1860, Accuracy: 9464/10000 (95%)
# Full training loop: after each epoch we accumulate the test loss and count
# correctly classified digits to compute the network's accuracy
for epoch in range(1, n_epochs + 1):
    train(epoch)
    test()
Training log (excerpt):
Train Epoch: 3 [45440/60000 (76%)] Loss: 0.171279
Train Epoch: 3 [46080/60000 (77%)] Loss: 0.238022
Train Epoch: 3 [46720/60000 (78%)] Loss: 0.211611
Train Epoch: 3 [47360/60000 (79%)] Loss: 0.205421
Train Epoch: 3 [48000/60000 (80%)] Loss: 0.278859
Train Epoch: 3 [48640/60000 (81%)] Loss: 0.273572
Train Epoch: 3 [49280/60000 (82%)] Loss: 0.273384
Train Epoch: 3 [49920/60000 (83%)] Loss: 0.189791
...
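The report also asks how structural changes affect training and detection time. One way to collect those numbers is a timed variant of the loop above; a sketch using the standard library's time module:
# Timed training/evaluation loop for the report's timing measurements
import time
for epoch in range(1, n_epochs + 1):
    t0 = time.perf_counter()
    train(epoch)
    t1 = time.perf_counter()
    test()
    t2 = time.perf_counter()
    print('epoch {}: train {:.1f}s, test {:.1f}s'.format(epoch, t1 - t0, t2 - t1))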
- Plot the training curve:
# Evaluate the model's performance: plot the training and test losses
fig = plt.figure()
plt.plot(train_counter, train_losses, color='blue')
plt.scatter(test_counter, test_losses, color='red')
plt.legend(['Train Loss', 'Test Loss'], loc='upper right')
plt.xlabel('number of training examples seen')
plt.ylabel('negative log likelihood loss')
plt.show()
- Inspect the training results:
# View a few predictions on one test batch
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
with torch.no_grad():
    output = network(example_data)
fig = plt.figure()
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.tight_layout()
    plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
    plt.title("Prediction: {}".format(
        output.data.max(1, keepdim=True)[1][i].item()))
    plt.xticks([])
    plt.yticks([])
plt.show()
The sampled predictions match their images, indicating that the trained network classifies the test digits essentially correctly.
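Beyond this spot check, a per-digit breakdown over the whole test set shows which classes the network still confuses. A sketch whose counting mirrors the logic in test():
# Per-digit accuracy over the full test set
correct = torch.zeros(10)
total = torch.zeros(10)
network.eval()
with torch.no_grad():
    for data, target in test_loader:
        pred = network(data).argmax(dim=1)
        for digit in range(10):
            mask = target == digit
            total[digit] += mask.sum()
            correct[digit] += (pred[mask] == digit).sum()
for digit in range(10):
    print('digit {}: {:.1f}%'.format(
        digit, 100. * correct[digit].item() / total[digit].item()))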