
PyTorch Basics (study notes)

Date: 2023-05-07 13:56:07
Tags: md, nn, loss, torch, learning, pytorch, train, test, import

Getting started with PyTorch

Source: https://www.bilibili.com/video/BV1hE411t7RN

Installation

# 1. NVIDIA drivers already installed
# 2. Install python-pytorch-cuda, then verify:
nsfoxer@ns-pc ~/Temp> yay -Qi python-pytorch-cuda numactl

Basic Usage

Dataset loading

Subclass Dataset and implement __getitem__ and __len__:

from torch.utils.data import Dataset

class MyData(Dataset):
    def __init__(self, samples):
        self.samples = samples  # e.g. a list of (image, label) pairs

    def __getitem__(self, index):
        return self.samples[index]

    def __len__(self):
        return len(self.samples)

TensorBoard

A tool for plotting training curves such as the loss.

# Install
yay -S python-setuptools tensorboard
# Code
from torch.utils.tensorboard.writer import SummaryWriter

writer = SummaryWriter("logs")  # output goes to the logs folder

# y = x * 10
for i in range(1000):
    writer.add_scalar("y=x", i*10, i)

writer.close()
# View
tensorboard --logdir logs
#!/bin/python
# Add an image to TensorBoard
from torch.utils.tensorboard.writer import SummaryWriter
import numpy as np
from PIL import Image

writer = SummaryWriter("logs")
image_path  = "./train/ants_image/0013035.jpg"
img_PIL = Image.open(image_path)
img_array = np.array(img_PIL)

print(img_array.shape)  # the numpy array layout here is HWC
writer.add_image("test", img_array, 1, dataformats="HWC")  # declare the HWC format

writer.close()

Transform

A torchvision utility for converting data (e.g. PIL images) into tensors.

from PIL import Image
from torchvision import transforms

image_path  = "./train/ants_image/0013035.jpg"
img = Image.open(image_path)

tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)

Torchvision

#!/bin/python
# torchvision datasets + transforms
import torchvision
from torch.utils.tensorboard.writer import SummaryWriter

dataset_transform = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor()
])

train_set = torchvision.datasets.CIFAR10(root="./datasets/", train=True, transform=dataset_transform, download=True)
test_set = torchvision.datasets.CIFAR10(root="./datasets/", train=False, transform=dataset_transform, download=True)

writer = SummaryWriter("./logs/")
for i in range(10):
    img, target = test_set[i]
    writer.add_image("test_set", img, i)

writer.close()

Containers (the nn.Module skeleton)

#!/bin/python
# nn
from torch import nn
import torch

class Base(nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, input):
        return input + 1

base = Base()
x = torch.tensor(1.0)
output = base(x)
print(output)

Conv2d (convolution)


  • stride: the step the kernel moves each time; defaults to 1. May be given as a tuple (sH, sW).

  • padding: zero-padding added around the input; defaults to 0.

#!/bin/python
# Convolution (functional API)
import torch
input = torch.tensor([
    [1, 2, 0, 3, 1],
    [0, 1, 2, 3, 1],
    [1, 2, 1, 0, 0],
    [5, 2, 3, 1, 1],
    [2, 1, 0, 1, 1]
])
# convolution kernel
kernel = torch.tensor([
    [1, 2, 1],
    [0, 1, 0],
    [2, 1, 0]
])

# reshape to (N, C, H, W)
input = torch.reshape(input, (1, 1, 5, 5))
kernel = torch.reshape(kernel, (1, 1, 3, 3))

output = torch.nn.functional.conv2d(input, kernel, stride=1)
# 3x3
print(output)

output = torch.nn.functional.conv2d(input, kernel, stride=2)
# 2x2
print(output)

output = torch.nn.functional.conv2d(input, kernel, stride=1, padding=1)
# 5x5
print(output)
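The sizes noted in the comments above follow from the standard output-size formula: out = (in + 2*padding - kernel) // stride + 1. A quick check in plain Python (the helper function name is my own):

```python
def conv2d_out_size(size, kernel, stride=1, padding=0):
    """Output side length of a 2-D convolution on a square input."""
    return (size + 2 * padding - kernel) // stride + 1

# 5x5 input, 3x3 kernel, as in the example above
print(conv2d_out_size(5, 3, stride=1))             # 3
print(conv2d_out_size(5, 3, stride=2))             # 2
print(conv2d_out_size(5, 3, stride=1, padding=1))  # 5
```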
#!/bin/python
# Convolution (nn.Conv2d on CIFAR-10)
import torch
from torch.nn import Conv2d
from torch.utils.data import DataLoader
import torchvision
from torch.utils.tensorboard.writer import SummaryWriter

dataset = torchvision.datasets.CIFAR10("./datasets/", train=False, transform=torchvision.transforms.ToTensor(), download=True)

dataloader = DataLoader(dataset, batch_size=64)

class Study(torch.nn.Module):
    def __init__(self) -> None:
        super(Study, self).__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)


    def forward(self, x):
        x = self.conv1(x)
        return x

study = Study()
print(study)

writer = SummaryWriter("./logs/")
step = 0
for data in dataloader:
    imgs, targets = data
    output = study(imgs)
    # torch.Size([64, 3, 32, 32])
    writer.add_images("input", imgs, step)
    # torch.Size([64, 6, 30, 30]): fold the 6 channels into extra batch entries so add_images can display them
    output = torch.reshape(output, (-1, 3, 30, 30))
    writer.add_images("output", output, step)
    step += 1
writer.close()

MaxPool2d (max pooling)

Its goal is to downsample the data while preserving its salient features.


#!/bin/python
# Max pooling
import torch
from torch.nn import MaxPool2d
input = torch.tensor([
    [1, 2, 0, 3, 1],
    [0, 1, 2, 3, 1],
    [1, 2, 1, 0, 0],
    [5, 2, 3, 1, 1],
    [2, 1, 0, 1, 1]
], dtype=torch.float32)
input = torch.reshape(input, (-1, 1, 5, 5))
print(input.shape)

class Study(torch.nn.Module):
    def __init__(self) -> None:
        super(Study, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, input):
        output = self.maxpool1(input)
        return output

study = Study()
output = study(input)
print(output)

Linear Layer


#!/bin/python
# Linear layer
import torch
from torch.nn import Linear
from torch.utils.data import DataLoader
import torchvision

dataset = torchvision.datasets.CIFAR10("./datasets/", train=False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)  # the flattening below expects exactly 64 images per batch

class Study(torch.nn.Module):
    def __init__(self) -> None:
        super(Study, self).__init__()
        self.linear = Linear(196608, 10)

    def forward(self, input):
        return self.linear(input)

study = Study()

for data in dataloader:
    imgs, target = data
    print(imgs.shape)
    output = torch.flatten(imgs) # flatten the whole batch into one 1-D vector
    print(output.shape)
    output = study(output)
    print(output.shape)
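Note that torch.flatten(imgs) above collapses the batch dimension too, which is why the Linear layer needs 196608 = 64*3*32*32 inputs (and why a final batch smaller than 64 would not fit). The more common pattern keeps the batch dimension; a sketch:

```python
import torch

imgs = torch.ones((64, 3, 32, 32))       # a CIFAR-10 sized batch
flat = torch.flatten(imgs, start_dim=1)  # keep dim 0 (the batch dimension)
print(flat.shape)                        # torch.Size([64, 3072])

linear = torch.nn.Linear(3072, 10)       # per-image features -> 10 classes
print(linear(flat).shape)                # torch.Size([64, 10])
```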

CIFAR 10


#!/bin/python
# Seq
import torch
from torch.nn import Conv2d, Flatten, Linear, MaxPool2d, Sequential
import torchvision

class Study(torch.nn.Module):
    def __init__(self) -> None:
        super(Study, self).__init__()
        self.model1 = Sequential(
                Conv2d(3, 32, 5, padding=2),
                MaxPool2d(2),
                Conv2d(32, 32, 5, padding=2),
                MaxPool2d(2),
                Conv2d(32, 64, 5, padding=2),
                MaxPool2d(2),
                Flatten(),
                Linear(1024, 64),
                Linear(64, 10)
                )

    def forward(self, input):
        return self.model1(input)

study = Study()
input = torch.ones((64, 3, 32, 32))
output = study(input)
print(output.shape)

loss

  • L1Loss: mean absolute error
  • MSELoss: mean squared error
  • CrossEntropyLoss: cross-entropy, for classification
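A small sketch of the three losses on toy tensors:

```python
import torch

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.0, 2.0, 5.0])

l1 = torch.nn.L1Loss()    # mean absolute error
mse = torch.nn.MSELoss()  # mean squared error
print(l1(pred, target))   # (0 + 0 + 2) / 3 -> 0.6667
print(mse(pred, target))  # (0 + 0 + 4) / 3 -> 1.3333

# CrossEntropyLoss takes raw logits of shape (N, C) and class indices of shape (N,)
logits = torch.tensor([[0.1, 0.2, 0.3]])
ce = torch.nn.CrossEntropyLoss()
print(ce(logits, torch.tensor([1])))
```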


Optimizer

#!/bin/python
# Seq
import torch
from torch.nn import Conv2d, Flatten, Linear, MaxPool2d, Sequential
from torch.utils.data import DataLoader
import torchvision
from torchvision.transforms import transforms

dataset = torchvision.datasets.CIFAR10("./datasets/", train=False, transform=transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=1)

class Study(torch.nn.Module):
    def __init__(self) -> None:
        super(Study, self).__init__()
        self.model1 = Sequential(
                Conv2d(3, 32, 5, padding=2),
                MaxPool2d(2),
                Conv2d(32, 32, 5, padding=2),
                MaxPool2d(2),
                Conv2d(32, 64, 5, padding=2),
                MaxPool2d(2),
                Flatten(),
                Linear(1024, 64),
                Linear(64, 10)
                )

    def forward(self, input):
        return self.model1(input)

loss = torch.nn.CrossEntropyLoss()
study = Study()
optim = torch.optim.SGD(study.parameters(), lr=0.01)

for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        outputs = study(imgs)
        result_loss = loss(outputs, targets)
        optim.zero_grad()
        # backpropagation
        result_loss.backward()
        optim.step()
        running_loss += result_loss.item()  # .item() extracts the scalar instead of accumulating graph tensors
    print(running_loss)

Saving and Loading Models

import torch
import torchvision

# the model to save must be defined first
vgg16 = torchvision.models.vgg16()

# Save (the entire model: structure + parameters)
torch.save(vgg16, "16.pth")
# Load
model = torch.load("16.pth")

# Save (parameters only)
torch.save(vgg16.state_dict(), "16.pth")
# Load
model = torchvision.models.vgg16()
model.load_state_dict(torch.load("16.pth"))

Model Training Workflow

#!/usr/bin/env python
# Training workflow

from torch import nn
import torch
from torch.utils.data import DataLoader
import torchvision
from torchvision.transforms import transforms
from torch.utils.tensorboard.writer import SummaryWriter

# Logging
writer = SummaryWriter("./logs/")

# 1. Prepare the training dataset
train_data = torchvision.datasets.CIFAR10("./datasets/", train=True, transform=transforms.ToTensor(), download=True)
# 2. Prepare the test dataset
test_data = torchvision.datasets.CIFAR10("./datasets/", train=False, transform=transforms.ToTensor(), download=True)

print(f"Train set size: {len(train_data)}\nTest set size: {len(test_data)}")

# 3. Load the datasets
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# 4. Build the network
class Study(nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super(Study, self).__init__(*args, **kwargs)
        self.model = nn.Sequential(
                nn.Conv2d(3, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(1024, 64),
                nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.model(x)

# Sanity-check the network's output shape
def test_study():
    study = Study()
    x = torch.ones((64, 3, 32, 32))
    y = study(x)
    print(y.shape)

# 5. Create the network
study = Study()
# 6. Create the loss function
loss_fn = nn.CrossEntropyLoss()
# 7. Optimizer
learning_rate = 0.01
optimizer = torch.optim.SGD(study.parameters(), lr=learning_rate)

# 8. Training parameters
train_step = 0   # number of training steps
test_step = 0    # number of test rounds
epoch = 10       # number of epochs

for i in range(epoch):
    print(f"Epoch {i+1}")
    # Training
    study.train()  # switch to training mode
    for (imgs, targets) in train_dataloader:
        # forward pass
        outputs = study(imgs)
        # loss
        loss = loss_fn(outputs, targets)
        # clear gradients
        optimizer.zero_grad()
        # backpropagation
        loss.backward()
        # update parameters
        optimizer.step()
        train_step += 1
        if train_step % 100 == 0:
            print(f"train step: {train_step}, loss={loss.item()}")
            writer.add_scalar("train_loss", loss.item(), train_step)

    # Evaluation for this epoch
    study.eval()  # switch to evaluation mode
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for (imgs, targets) in test_dataloader:
            outputs = study(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            total_accuracy += (outputs.argmax(1) == targets).sum()
    print("Total test loss:", total_test_loss)
    print("Test accuracy:", total_accuracy / len(test_data))
    test_step += 1
    writer.add_scalar("test_loss", total_test_loss, test_step)
    writer.add_scalar("test_accuracy", total_accuracy / len(test_data), test_step)

    # Save this epoch's model
    torch.save(study.state_dict(), f"study_{i}.pth")
    print("model saved")

writer.close()

GPU Training

Method 1

Move these three things to the GPU:

  • the network model
  • the data (inputs and labels)
  • the loss function

Simply call .cuda() on each of them.

#!/usr/bin/env python
# Training workflow (GPU via .cuda())

from torch import nn
import torch
from torch.utils.data import DataLoader
import torchvision
from torchvision.transforms import transforms
from torch.utils.tensorboard.writer import SummaryWriter
import time


# Logging
writer = SummaryWriter("./logs/")

# 1. Prepare the training dataset
train_data = torchvision.datasets.CIFAR10("./datasets/", train=True, transform=transforms.ToTensor(), download=True)
# 2. Prepare the test dataset
test_data = torchvision.datasets.CIFAR10("./datasets/", train=False, transform=transforms.ToTensor(), download=True)

print(f"Train set size: {len(train_data)}\nTest set size: {len(test_data)}")

# 3. Load the datasets
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# 4. Build the network
class Study(nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super(Study, self).__init__(*args, **kwargs)
        self.model = nn.Sequential(
                nn.Conv2d(3, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(1024, 64),
                nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.model(x)

# Sanity-check the network's output shape
def test_study():
    study = Study()
    x = torch.ones((64, 3, 32, 32))
    y = study(x)
    print(y.shape)

# 5. Create the network
study = Study().cuda()
# 6. Create the loss function
loss_fn = nn.CrossEntropyLoss().cuda()
# 7. Optimizer
learning_rate = 0.01
optimizer = torch.optim.SGD(study.parameters(), lr=learning_rate)

# 8. Training parameters
train_step = 0   # number of training steps
test_step = 0    # number of test rounds
epoch = 10       # number of epochs

start_time = time.time()
for i in range(epoch):
    print(f"Epoch {i+1}")
    # Training
    study.train()  # switch to training mode
    for (imgs, targets) in train_dataloader:
        (imgs, targets) = (imgs.cuda(), targets.cuda())
        # forward pass
        outputs = study(imgs)
        # loss
        loss = loss_fn(outputs, targets)
        # clear gradients
        optimizer.zero_grad()
        # backpropagation
        loss.backward()
        # update parameters
        optimizer.step()
        train_step += 1
        if train_step % 100 == 0:
            end_time = time.time()
            print(f"elapsed: {end_time - start_time}")
            print(f"train step: {train_step}, loss={loss.item()}")
            writer.add_scalar("train_loss", loss.item(), train_step)

    # Evaluation for this epoch
    study.eval()  # switch to evaluation mode
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for (imgs, targets) in test_dataloader:
            (imgs, targets) = (imgs.cuda(), targets.cuda())
            outputs = study(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            total_accuracy += (outputs.argmax(1) == targets).sum()
    print("Total test loss:", total_test_loss)
    print("Test accuracy:", total_accuracy / len(test_data))
    test_step += 1
    writer.add_scalar("test_loss", total_test_loss, test_step)
    writer.add_scalar("test_accuracy", total_accuracy / len(test_data), test_step)

    # Save this epoch's model
    torch.save(study.state_dict(), f"study_{i}.pth")
    print("model saved")

writer.close()

Method 2: call .to(device) directly

#!/usr/bin/env python
# Training workflow (GPU via .to(device))

from torch import nn
import torch
from torch.utils.data import DataLoader
import torchvision
from torchvision.transforms import transforms
from torch.utils.tensorboard.writer import SummaryWriter
import time

device = torch.device("cuda:0")

# Logging
writer = SummaryWriter("./logs/")

# 1. Prepare the training dataset
train_data = torchvision.datasets.CIFAR10("./datasets/", train=True, transform=transforms.ToTensor(), download=True)
# 2. Prepare the test dataset
test_data = torchvision.datasets.CIFAR10("./datasets/", train=False, transform=transforms.ToTensor(), download=True)
print(test_data.class_to_idx)


print(f"Train set size: {len(train_data)}\nTest set size: {len(test_data)}")

# 3. Load the datasets
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# 4. Build the network
class Study(nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super(Study, self).__init__(*args, **kwargs)
        self.model = nn.Sequential(
                nn.Conv2d(3, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(1024, 64),
                nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.model(x)

# Sanity-check the network's output shape
def test_study():
    study = Study()
    x = torch.ones((64, 3, 32, 32))
    y = study(x)
    print(y.shape)

# 5. Create the network
study = Study().to(device)
# 6. Create the loss function
loss_fn = nn.CrossEntropyLoss().to(device)
# 7. Optimizer
learning_rate = 0.01
optimizer = torch.optim.SGD(study.parameters(), lr=learning_rate)

# 8. Training parameters
train_step = 0   # number of training steps
test_step = 0    # number of test rounds
epoch = 10       # number of epochs

start_time = time.time()
for i in range(epoch):
    print(f"Epoch {i+1}")
    # Training
    study.train()  # switch to training mode
    for (imgs, targets) in train_dataloader:
        (imgs, targets) = (imgs.to(device), targets.to(device))
        # forward pass
        outputs = study(imgs)
        # loss
        loss = loss_fn(outputs, targets)
        # clear gradients
        optimizer.zero_grad()
        # backpropagation
        loss.backward()
        # update parameters
        optimizer.step()
        train_step += 1
        if train_step % 100 == 0:
            end_time = time.time()
            print(f"elapsed: {end_time - start_time}")
            print(f"train step: {train_step}, loss={loss.item()}")
            writer.add_scalar("train_loss", loss.item(), train_step)

    # Evaluation for this epoch
    study.eval()  # switch to evaluation mode
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for (imgs, targets) in test_dataloader:
            (imgs, targets) = (imgs.to(device), targets.to(device))
            outputs = study(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            total_accuracy += (outputs.argmax(1) == targets).sum()
    print("Total test loss:", total_test_loss)
    print("Test accuracy:", total_accuracy / len(test_data))
    test_step += 1
    writer.add_scalar("test_loss", total_test_loss, test_step)
    writer.add_scalar("test_accuracy", total_accuracy / len(test_data), test_step)

    # Save this epoch's model
    torch.save(study.state_dict(), f"study_{i}.pth")
    print("model saved")

writer.close()

Using a Trained Model

# Get the class-to-index mapping
print(test_data.class_to_idx)
# {'airplane': 0, 'automobile': 1, 'bird': 2, 'cat': 3, 'deer': 4, 'dog': 5, 'frog': 6, 'horse': 7, 'ship': 8, 'truck': 9}
#!/usr/bin/env python
# Use the trained model for inference
from PIL import Image
from torch import nn
import torch
from torchvision.transforms import transforms

img_path = "./air.png"
img = Image.open(img_path)
img = img.convert("RGB")

transform = transforms.Compose([transforms.Resize((32, 32)),
                        transforms.ToTensor()])

img = transform(img)

class Study(nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super(Study, self).__init__(*args, **kwargs)
        self.model = nn.Sequential(
                nn.Conv2d(3, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 32, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 5, 1, 2),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(1024, 64),
                nn.Linear(64, 10),
        )
        
    def forward(self, x):
        return self.model(x)

model = Study()
model.load_state_dict(torch.load("./study.pth"))
img = torch.reshape(img, (1, 3, 32, 32))

model.eval()
with torch.no_grad():
    output = model(img)
    print(output)
    print(output.argmax(1))

Dive into Deep Learning

Textbook: https://zh-v2.d2l.ai/

Video: https://www.bilibili.com/video/BV1J54y187f9/


  • image classification
  • object detection and segmentation
  • style transfer
  • face synthesis
  • text-to-image generation
  • text generation (GPT-3)
  • autonomous driving
  • ad click-through prediction

Data Manipulation and Preprocessing

N-dimensional array examples

The N-dimensional array is the main data structure of machine learning and neural networks.

Dimension   Math object   Example                             Typical use
0-d array   scalar        1.0                                 a class label
1-d array   vector        [1.0, 2.3, 3.7]                     a feature vector
2-d array   matrix        [[1.0, 2.0, 3.0],[4.0, 5.0, 6.0]]   a sample-feature matrix
3-d array                 [[[1., 2.],[3., 4.]]]               an RGB image (width x height x channel)
4-d array                 [[[[ ]]]]                           a batch of RGB images (batch x width x height x channel)
5-d array                 [[[[[ ..... ]]]]]                   a batch of videos (batch x time x width x height x channel)

Creating an array requires:

  • a shape: e.g. a 3x4 matrix
  • an element type: e.g. float32
  • the value of each element
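These three ingredients map directly onto tensor creation; a minimal sketch:

```python
import torch

# shape 3x4, element type float32, all-zero values
x = torch.zeros((3, 4), dtype=torch.float32)
print(x.shape)    # torch.Size([3, 4])
print(x.dtype)    # torch.float32
print(x.numel())  # 12 elements in total
```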

Accessing elements: by index and by slicing (a single element, a row, a column, or a sub-region).
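The usual indexing and slicing patterns, as a sketch:

```python
import torch

x = torch.arange(12).reshape(3, 4)
print(x[1, 2])      # single element -> tensor(6)
print(x[1])         # one row
print(x[:, 1])      # one column
print(x[1:3, :])    # rows 1..2, every column
print(x[::2, ::2])  # every other row and column
```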

Array operations:

  • Tensor: shape gives the tensor's shape and numel() its total number of elements; reshape() changes the shape; zeros() creates an all-zero tensor.
  • Broadcasting: tensors with compatible shapes are expanded automatically; incompatible sizes raise an error.
  • Convert to numpy: X.numpy()
  • Convert a size-1 tensor to a plain Python number: a.item() or float(a)
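A sketch of broadcasting and scalar conversion:

```python
import torch

a = torch.arange(3).reshape(3, 1)  # shape (3, 1)
b = torch.arange(2).reshape(1, 2)  # shape (1, 2)
print(a + b)                       # broadcast to shape (3, 2)

x = torch.tensor([3.5])
print(x.item())   # 3.5 as a Python float
print(float(x))   # same
print(x.numpy())  # the NumPy view of the tensor
```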

Data preprocessing:

# Read a CSV file
import pandas as pd

data = pd.read_csv("./test.csv")
print(data)

  • Handling missing data: impute (e.g. interpolate) or drop.
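Both options can be sketched on a small in-memory frame (the column names here are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"rooms": [2.0, None, 4.0], "price": [100, 120, None]})
print(df.fillna(df.mean()))  # impute missing values with the column mean
print(df.dropna())           # or drop every row containing a NaN
```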

From: https://www.cnblogs.com/nsfoxer/p/17379230.html
