layout: post
title: CS231N Post-Lecture Reflection Notes
subtitle: CS231N Post-Lecture Reflection Notes
description: CS231N Post-Lecture Reflection Notes
date: 2022-10-26
categories: deeplearning
tags: notes
comments: true
Assignment 1
Assignment 2
Network visualization
1. Saliency Maps
A saliency map shows which pixels of an input image influence the final classification result, and by how much.
How it is computed:
1. Compute the score of the correct class.
2. Backpropagate to get the gradient of that score with respect to the input image.
3. Take the absolute value of the gradient.
4. Take the maximum over the color channels at each pixel.
Code:
```python
import torch


def compute_saliency_maps(X, y, model):
    """
    Compute a class saliency map using the model for images X and labels y.

    Input:
    - X: Input images; Tensor of shape (N, 3, H, W)
    - y: Labels for X; LongTensor of shape (N,)
    - model: A pretrained CNN that will be used to compute the saliency map.

    Returns:
    - saliency: A Tensor of shape (N, H, W) giving the saliency maps for the input
      images.
    """
    # Make sure the model is in "test" mode
    model.eval()

    # Make input tensor require gradient
    X.requires_grad_()

    saliency = None
    ##############################################################################
    # TODO: Implement this function. Perform a forward and backward pass through #
    # the model to compute the gradient of the correct class score with respect  #
    # to each input image. You first want to compute the loss over the correct   #
    # scores (we'll combine losses across a batch by summing), and then compute  #
    # the gradients with a backward pass.                                        #
    ##############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    scores = model(X)                                    # (N, C) class scores
    correct = scores.gather(1, y.view(-1, 1)).squeeze()  # score of the correct class, (N,)
    correct.backward(torch.ones_like(correct))           # backprop the summed correct-class scores
    saliency = X.grad.data.abs()                         # gradient magnitude per pixel, (N, 3, H, W)
    saliency, _ = torch.max(saliency, dim=1)             # max over color channels -> (N, H, W)
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    ##############################################################################
    #                             END OF YOUR CODE                               #
    ##############################################################################
    return saliency
```
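A note on the design choice: passing `torch.ones_like(correct)` to `backward()` is equivalent to calling `correct.sum().backward()`. Both sum the correct-class scores across the batch before backpropagating, which is exactly the "combine losses across a batch by summing" that the TODO comment describes, and it works for any batch size rather than a fixed one.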
2. Fooling Images
Take an image that actually belongs to class X and perturb it to fool the model into classifying it as a chosen target class Y.
Update rule (gradient ascent on the target-class score, where \(g\) is the gradient of that score with respect to the image):
\(dX = \text{learning\_rate} \cdot \dfrac{g}{\|g\|_2}\)
\(X = X + dX\)
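A minimal sketch of the resulting update loop, assuming a pretrained `model` and a single image `X` of shape (1, 3, H, W); the function name, learning rate, and iteration cap below are illustrative, not the assignment's exact starter code:

```python
import torch


def make_fooling_image(X, target_y, model, learning_rate=1.0, max_iter=100):
    # Copy the source image and track gradients on the copy.
    X_fooling = X.clone().detach().requires_grad_(True)
    model.eval()

    for _ in range(max_iter):
        scores = model(X_fooling)
        if scores.argmax(dim=1).item() == target_y:
            break  # the model now predicts the target class: fooled
        # Gradient ascent on the target-class score.
        scores[0, target_y].backward()
        g = X_fooling.grad.data
        with torch.no_grad():
            X_fooling += learning_rate * g / g.norm()  # dX = lr * g / ||g||_2
        X_fooling.grad.zero_()
    return X_fooling.detach()
```

Normalizing by \(\|g\|_2\) keeps the step size controlled by the learning rate alone, regardless of how large the raw gradient happens to be.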
3. Random Noise
Start from a randomly generated noise image and update it by gradient ascent until the model classifies it as a chosen target class.
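A minimal sketch of this class-visualization idea, doing gradient ascent on the target-class score starting from noise. The full assignment version also adds tricks such as jitter and periodic blurring, which are omitted here; all names and hyperparameters below are illustrative:

```python
import torch


def create_class_visualization(target_y, model, learning_rate=25.0,
                               l2_reg=1e-3, num_iterations=100):
    # Start from a random noise image (assuming a model with 224x224 inputs).
    img = torch.randn(1, 3, 224, 224).requires_grad_(True)
    model.eval()

    for _ in range(num_iterations):
        score = model(img)[0, target_y]
        # Maximize the class score, with an L2 penalty keeping pixel values bounded.
        objective = score - l2_reg * (img ** 2).sum()
        objective.backward()
        with torch.no_grad():
            img += learning_rate * img.grad
        img.grad.zero_()
    return img.detach()
```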
Assignment 3
Self Supervised Learning
Observation:
The same model, after first learning to extract features via self-supervised pretraining, performs much better on the downstream task (classification) than training from scratch.
Thoughts:
1. The self-supervised model has effectively trained for more time, and its pretraining was done on a larger dataset, so a direct comparison is somewhat unfair.
2. To control for this, reset the training setup so that the self-supervised + fine-tune model and the train-from-scratch model use a dataset of the same size, the same training strategy, and the same amount of training time.
Even under this fair comparison, self-supervised pretraining + fine-tuning still clearly beats training from scratch, which shows that the contrastive pretraining really did give the model a better feature representation. A sketch of the contrastive loss behind this follows.
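The self-supervised part of the assignment is based on SimCLR-style contrastive learning. As a sketch of what "contrastive learning" means concretely, here is a simplified NT-Xent (normalized temperature-scaled cross-entropy) loss over a batch of positive pairs; this is not the assignment's exact starter code, and `tau` is the temperature hyperparameter:

```python
import torch
import torch.nn.functional as F


def simclr_loss(z1, z2, tau=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    N = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-length rows
    sim = z @ z.t() / tau                               # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # an example is never its own negative
    # The positive for row i is its other augmented view: i+N for i < N, else i-N.
    targets = torch.cat([torch.arange(N) + N, torch.arange(N)])
    return F.cross_entropy(sim, targets)
```

Minimizing this loss pulls the two augmented views of the same image together and pushes all other images in the batch apart, which is what forces the encoder to learn augmentation-invariant features.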
From: https://www.cnblogs.com/cyinen/p/17161606.html