torch.rand(*sizes, out=None) → Tensor
Returns a tensor filled with random numbers drawn from the uniform distribution on the interval [0, 1). The shape of the tensor is defined by the argument sizes.
Parameters:
- sizes (int...) – a sequence of integers defining the shape of the output tensor
- out (Tensor, optional) – the output tensor
torch.rand(2, 3)
0.0836 0.6151 0.6958
0.6998 0.2560 0.0139
[torch.FloatTensor of size 2x3]
torch.matmul — multiplies two tensors
torch.mm(mat1, mat2, out=None) → Tensor
torch.matmul(mat1, mat2, out=None) → Tensor
Performs a matrix multiplication of mat1 and mat2. If mat1 is an n×m tensor and mat2 is an m×p tensor, the output out is an n×p tensor. (torch.mm only accepts 2-D matrices, while torch.matmul also handles 1-D and batched inputs with broadcasting.)
Parameters:
- mat1 (Tensor) – the first matrix to be multiplied
- mat2 (Tensor) – the second matrix to be multiplied
- out (Tensor, optional) – the output tensor
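A small sketch of the shape rule above (the tensors here are made up for illustration):
import torch

mat1 = torch.randn(2, 3)          # n×m
mat2 = torch.randn(3, 4)          # m×p
out = torch.mm(mat1, mat2)        # n×p, i.e. shape (2, 4)
same = torch.matmul(mat1, mat2)   # identical result for 2-D inputs
print(out.shape)                  # torch.Size([2, 4])
print(torch.allclose(out, same))  # True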
transpose()
In NumPy, ndarray.transpose() with no arguments reverses the axes, which for a 2-D array is the ordinary transpose. With arguments it takes the desired axis order: transpose(0, 1) keeps the default order and leaves a 2-D array unchanged, while transpose(1, 0) swaps the axes and transposes it.
import numpy as np

x = np.arange(4).reshape((2, 2))   # values 0-3 in 2 rows and 2 columns: [[0, 1], [2, 3]]

# 2-D, no arguments: axes reversed, i.e. the transpose
y1 = x.transpose()
# y1 = [[0, 2],
#       [1, 3]]

# 2-D, axes in the default order: unchanged
y2 = x.transpose(0, 1)
# y2 = [[0, 1],
#       [2, 3]]

# 2-D, axes in swapped order: transposed
y3 = x.transpose(1, 0)
# y3 = [[0, 2],
#       [1, 3]]
Note that PyTorch's Tensor.transpose(dim0, dim1) works differently: it swaps the two given dimensions, so for a 2-D tensor transpose(0, 1) and transpose(1, 0) both return the transpose.
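A minimal PyTorch sketch of that difference, mirroring the NumPy example above:
import torch

t = torch.arange(4).reshape(2, 2)   # tensor([[0, 1], [2, 3]])
print(t.transpose(0, 1))            # tensor([[0, 2], [1, 3]])
print(t.transpose(1, 0))            # same result: dims 0 and 1 are swapped either way
print(t.T)                          # .T (or .t()) gives the 2-D transpose directly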
torch.no_grad()
Typically used in the inference phase of a neural network: inside the no_grad context, tensor computations are not tracked for gradient computation.
grad
Directional derivatives and gradients; after backward() is called, a parameter's gradient is stored in its .grad attribute.
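A minimal sketch of both ideas (the tensor w here is just an illustration):
import torch

w = torch.randn(3, requires_grad=True)
loss = (w * w).sum()
loss.backward()              # fills w.grad with d(loss)/dw = 2*w
print(w.grad)

with torch.no_grad():        # e.g. inference, or manual parameter updates
    y = w * 2                # no computation graph is recorded here
print(y.requires_grad)       # False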
# Binary classification
# Logistic regression example
import torch

n_item = 1000          # number of samples
n_feature = 2          # feature dimension
learning_rate = 0.001  # learning rate
epochs = 100           # number of training epochs

# fake data
torch.manual_seed(123)
data_x = torch.randn(size=(n_item, n_feature)).float()   # 1000 random samples, 2 features each
data_y = torch.where(torch.subtract(data_x[:, 0] * 0.5, data_x[:, 1] * 1.5) > 0, 1., 0.).float()
# The label is 1 when 0.5x the first column is greater than 1.5x the second column, otherwise 0.
# torch.where(condition, a, b): where condition holds, the output takes a, otherwise b.
# torch.subtract(input, other, *, alpha=1, out=None) → Tensor
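# An illustrative note on the two helpers above (made-up values, not the training data):
#   torch.where(torch.tensor([True, False]), 1., 0.)                -> tensor([1., 0.])
#   torch.subtract(torch.tensor([3., 1.]), torch.tensor([1., 2.]))  -> tensor([2., -1.])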
# Logistic regression
class LogisticRegressionManually(object):
    def __init__(self):
        # model parameters w and b
        self.w = torch.randn(size=(n_feature, 1), requires_grad=True)
        # requires_grad=True marks w as a trainable parameter: it can be differentiated and updated.
        # Without it the tensor is treated as a constant.
        self.b = torch.randn(size=(1, 1), requires_grad=True)
        # torch.randn(size=...) → Tensor: samples from the standard normal distribution, shape given by size.

    # forward pass
    def forward(self, x):
        y_hat = torch.sigmoid(torch.matmul(self.w.transpose(0, 1), x) + self.b)
        # torch.matmul multiplies the two tensors; transpose(0, 1) turns w into a row vector
        return y_hat

    # compute the loss (binary cross-entropy)
    def loss_func(self, y_hat, y):
        # Each step feeds a single sample, so the 1/M averaging factor (M = 1) is omitted.
        return -(torch.log(y_hat) * y + (1 - y) * torch.log(1 - y_hat))
    # training loop
    def train(self):
        for epoch in range(epochs):
            # 1. load the data (one sample at a time)
            for step in range(n_item):
                # 2. forward pass
                y_hat = self.forward(data_x[step])
                y = data_y[step]  # ground-truth label for this sample
                # 3. compute the loss
                loss = self.loss_func(y_hat, y)
                # 4. backward pass
                loss.backward()
                # 5. update the parameters
                with torch.no_grad():
                    self.w.data -= learning_rate * self.w.grad.data
                    self.b.data -= learning_rate * self.b.grad.data
                    # .grad holds the derivative computed by backward()
                    # zero the gradients before the next step
                    self.w.grad.data.zero_()
                    self.b.grad.data.zero_()
            print('Epoch: %03d, loss: %.3f' % (epoch, loss.item()))


if __name__ == '__main__':
    lrm = LogisticRegressionManually()
    lrm.train()
Epoch: 000, loss: 0.988
Epoch: 001, loss: 0.898
Epoch: 002, loss: 0.825
Epoch: 003, loss: 0.766
Epoch: 004, loss: 0.719
Epoch: 005, loss: 0.679
Epoch: 006, loss: 0.646
Epoch: 007, loss: 0.618
Epoch: 008, loss: 0.594
Epoch: 009, loss: 0.574
Epoch: 010, loss: 0.556
Epoch: 011, loss: 0.540
Epoch: 012, loss: 0.527
Epoch: 013, loss: 0.515
Epoch: 014, loss: 0.504
Epoch: 015, loss: 0.494
Epoch: 016, loss: 0.485
Epoch: 017, loss: 0.478
Epoch: 018, loss: 0.470
Epoch: 019, loss: 0.464
Epoch: 020, loss: 0.458
Epoch: 021, loss: 0.452
Epoch: 022, loss: 0.447
Epoch: 023, loss: 0.442
Epoch: 024, loss: 0.437
Epoch: 025, loss: 0.433
Epoch: 026, loss: 0.429
Epoch: 027, loss: 0.425
Epoch: 028, loss: 0.422
Epoch: 029, loss: 0.418
Epoch: 030, loss: 0.415
Epoch: 031, loss: 0.412
Epoch: 032, loss: 0.409
Epoch: 033, loss: 0.406
Epoch: 034, loss: 0.404
Epoch: 035, loss: 0.401
Epoch: 036, loss: 0.399
Epoch: 037, loss: 0.396
Epoch: 038, loss: 0.394
Epoch: 039, loss: 0.391
Epoch: 040, loss: 0.389
Epoch: 041, loss: 0.387
Epoch: 042, loss: 0.385
Epoch: 043, loss: 0.383
Epoch: 044, loss: 0.381
Epoch: 045, loss: 0.379
Epoch: 046, loss: 0.377
Epoch: 047, loss: 0.375
Epoch: 048, loss: 0.374
Epoch: 049, loss: 0.372
Epoch: 050, loss: 0.370
Epoch: 051, loss: 0.368
Epoch: 052, loss: 0.367
Epoch: 053, loss: 0.365
Epoch: 054, loss: 0.363
Epoch: 055, loss: 0.362
Epoch: 056, loss: 0.360
Epoch: 057, loss: 0.359
Epoch: 058, loss: 0.357
Epoch: 059, loss: 0.356
Epoch: 060, loss: 0.354
Epoch: 061, loss: 0.353
Epoch: 062, loss: 0.352
Epoch: 063, loss: 0.350
Epoch: 064, loss: 0.349
Epoch: 065, loss: 0.348
Epoch: 066, loss: 0.346
Epoch: 067, loss: 0.345
Epoch: 068, loss: 0.344
Epoch: 069, loss: 0.342
Epoch: 070, loss: 0.341
Epoch: 071, loss: 0.340
Epoch: 072, loss: 0.339
Epoch: 073, loss: 0.337
Epoch: 074, loss: 0.336
Epoch: 075, loss: 0.335
Epoch: 076, loss: 0.334
Epoch: 077, loss: 0.333
Epoch: 078, loss: 0.332
Epoch: 079, loss: 0.331
Epoch: 080, loss: 0.329
Epoch: 081, loss: 0.328
Epoch: 082, loss: 0.327
Epoch: 083, loss: 0.326
Epoch: 084, loss: 0.325
Epoch: 085, loss: 0.324
Epoch: 086, loss: 0.323
Epoch: 087, loss: 0.322
Epoch: 088, loss: 0.321
Epoch: 089, loss: 0.320
Epoch: 090, loss: 0.319
Epoch: 091, loss: 0.318
Epoch: 092, loss: 0.317
Epoch: 093, loss: 0.316
Epoch: 094, loss: 0.315
Epoch: 095, loss: 0.314
Epoch: 096, loss: 0.313
Epoch: 097, loss: 0.312
Epoch: 098, loss: 0.311
Epoch: 099, loss: 0.310