
Machine-Learning-Based Time-Series Temperature Prediction


This post uses a GRU model and a GRU-Attention model to fit and forecast a long temperature time series. Readers interested in either model can find detailed introductions online.
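
For reference, the update that a single nn.GRU layer performs at each time step (as documented for PyTorch) is

$$
\begin{aligned}
r_t &= \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{t-1} + b_{hr})\\
z_t &= \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{t-1} + b_{hz})\\
n_t &= \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{t-1} + b_{hn}))\\
h_t &= (1 - z_t) \odot n_t + z_t \odot h_{t-1}
\end{aligned}
$$

The attention variants used later differ only in that they form a learned weighted sum over the hidden states of all time steps instead of reading out just the last one.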

The daily data comes first. Since it contains missing values, the gaps have to be filled before modelling.

For the data with missing values I use cubic spline interpolation to fill the gaps; the code is as follows:

import pandas as pd
import numpy as np
from scipy.interpolate import interp1d
 
# Suppose we have a dataset containing missing values
df = pd.read_csv('your_data.csv')
 
field = df['wd']
 
field = np.where(field == -999, np.nan, field)
 
# Build an interpolation function over the known points using cubic splines;
# bounds_error=False with fill_value="extrapolate" guards against NaNs at the ends of the series
interpolator = interp1d(np.arange(len(field))[~np.isnan(field)], field[~np.isnan(field)],
                        kind='cubic', bounds_error=False, fill_value="extrapolate")

# Interpolate at the missing positions
filled_data = np.where(np.isnan(field), interpolator(np.arange(len(field))), field)
 
# Write the filled series back as a new column
df['wd_1'] = filled_data
 
# Save the result to a CSV file
df.to_csv('new_zp.csv', index=False)
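
As an aside, pandas can do the same cubic interpolation in a single call, which may be simpler when the data already lives in a DataFrame. A minimal sketch, assuming the same -999 missing-value convention (edge NaNs that the spline cannot reach would still need ffill()/bfill()):

import numpy as np
import pandas as pd

df = pd.read_csv('your_data.csv')
s = df['wd'].replace(-999, np.nan)
# cubic spline interpolation over the integer index positions
df['wd_1'] = s.interpolate(method='cubic')
df.to_csv('new_zp.csv', index=False)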

The cleaned data then has to be turned into a supervised dataset. Because the amount of data here is small, I did not use DataLoader and similar utilities; the data is also normalized:

import numpy as np
import pandas as pd
from torch.utils import data
import torch
from matplotlib import pyplot as plt
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
from torchsummary import summary
import torch.cuda
 
loc = r'your_data_after_prefect.csv'
data_csv = pd.read_csv(loc, header=None)
 
yt = data_csv.iloc[2:-1, 5]
yt_1 = yt.shift(1)
yt_2 = yt.shift(2)
yt_3 = yt.shift(3)
yt_4 = yt.shift(4)
yt_5 = yt.shift(5)
yt_6 = yt.shift(6)
yt_7 = yt.shift(7)
data = pd.concat([yt, yt_1, yt_2, yt_3, yt_4, yt_5,yt_6,yt_7], axis=1)
 
data.columns = ['yt', 'yt_1', 'yt_2', 'yt_3', 'yt_4', 'yt_5','yt_6','yt_7']
data.head(10)
data = data.dropna()
x1 = np.array(data['yt_1'], dtype=np.float32)
x1 = torch.tensor(x1)
x2 = torch.tensor(np.array(data['yt_2'], dtype=np.float32))
x3 = torch.tensor(np.array(data['yt_3'], dtype=np.float32))
x4 = torch.tensor(np.array(data['yt_4'], dtype=np.float32))
x5 = torch.tensor(np.array(data['yt_5'], dtype=np.float32))
x6 = torch.tensor(np.array(data['yt_6'], dtype=np.float32))
x7 = torch.tensor(np.array(data['yt_7'], dtype=np.float32))
 
x = torch.cat((x7, x6, x5, x4, x3, x2, x1), dim=0)  # stack the lags from oldest (t-7) to newest (t-1)
x = x.reshape(7, -1).T
 
y = np.array(data['yt'], dtype=np.float32)
y = y.reshape(len(y), 1)
y = torch.tensor(y)
 
scaler_x = preprocessing.MinMaxScaler(feature_range=(-1, 1))
scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
 
x = scaler_x.fit_transform(x)
y = scaler_y.fit_transform(y)
 
train_end = 5479
 
x_train = torch.tensor(x[0:train_end, ], dtype=torch.float32)
y_train = torch.tensor(y[0:train_end, ], dtype=torch.float32)
x_test = torch.tensor(x[train_end + 1:-1], dtype=torch.float32)
y_test = torch.tensor(y[train_end + 1:-1], dtype=torch.float32)
 
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
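
Since DataLoader was deliberately skipped, it may still be worth showing what the batching would look like with it; a minimal sketch for larger datasets, not what this post actually runs:

from torch.utils.data import TensorDataset, DataLoader

train_ds = TensorDataset(x_train, y_train)
train_loader = DataLoader(train_ds, batch_size=64, shuffle=False)
# the manual slicing in the training loop below would then become:
# for batch_x, batch_y in train_loader:
#     ...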

A CUDA device is created above, but with this little data the GPU is not really needed (the model and tensors below in fact stay on the CPU). Next comes building and training the model; the normalized data must also be inverse-transformed before the metrics can be computed. The code is as follows:
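
If the GPU were actually used, the model and every batch would have to be moved to the device explicitly. A self-contained sketch of that pattern, purely illustrative:

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = nn.Linear(7, 1).to(device)    # move the parameters to the device
xb = torch.randn(4, 7).to(device)   # move each batch the same way
print(net(xb).device)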

seed = 2019
np.random.seed(seed)
torch.manual_seed(seed)  # also seed PyTorch so that weight initialization is reproducible
 
class GRUModel(nn.Module):
    def __init__(self):
        super(GRUModel, self).__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=32, num_layers=1, batch_first=True)  # batch_first: input is (batch, seq, feature)
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, 1)
 
    def forward(self, x):
        out, _ = self.gru(x)
        out = self.fc1(out[:,-1,:])
        out = self.fc(self.act1(out))
        out = self.dense(self.act2(out))
        return out
 
 
model = GRUModel()
 
# count the trainable parameters
params_count = sum(p.numel() for p in model.parameters() if p.requires_grad)
 
# print the model and its parameter count
print(model)
print("Total params: ", params_count)
torch.save(model, 'qh_gru.pt')
#model = torch.load('gru.pt')
 
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
 
batch_size = 64
num_epochs = 50
for epoch in range(num_epochs):
    for i in range(0, len(x_train), batch_size):
        batch_x = x_train[i:i + batch_size]
        batch_y = y_train[i:i + batch_size]
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
        
    print('Epoch: %d, Loss: %f' % (epoch, float(loss)))
 
model.eval()
with torch.no_grad():
    outputs_train = model(x_train)
score_train = criterion(outputs_train, y_train).item()
 
with torch.no_grad():
    outputs_test = model(x_test)
score_test = criterion(outputs_test, y_test).item()
 
print('In Train MSE=', round(score_train, 5))
print('In Test MSE=', round(score_test, 5))
 
 
y_test = scaler_y.inverse_transform(np.array(y_test).reshape((len(y_test), 1)))
predictions = model(x_test).detach().numpy()
predictions = scaler_y.inverse_transform(np.array(predictions).reshape((len(predictions), 1)))

Here some metrics are computed to quantify the goodness of fit, and the results are plotted:
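
Two of the metrics are worth writing out explicitly. MAPE is

$$\mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$$

and the IA computed below is Willmott's index of agreement,

$$\mathrm{IA} = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}\left(\left|\hat{y}_i - \bar{y}\right| + \left|y_i - \bar{y}\right|\right)^2},$$

which is exactly what the calculate_IA function implements.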

rmse = np.sqrt(mean_squared_error(y_test, predictions))
print("RMSE:", rmse)
 
mae = mean_absolute_error(y_test, predictions)
print("MAE:", mae)
 
r2 = r2_score(y_test, predictions)
print("R2:", r2)
 
def calculate_mape(actual, predicted):
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted lists must have the same length")
    if 0 in actual:
        raise ValueError("actual list must not contain zero values")
    
    percentage_errors = [abs((actual[i] - predicted[i]) / actual[i]) for i in range(len(actual))]
    mape = sum(percentage_errors) * 100 / len(actual)
    return mape
mape = calculate_mape(y_test, predictions)
print(f"MAPE: {mape}")
 
def calculate_IA(observed, predicted):
    numerator = np.sum((observed - predicted) ** 2)
    denominator = np.sum((np.abs(predicted - np.mean(observed)) + np.abs(observed - np.mean(observed))) ** 2)
    ia = 1 - (numerator / denominator)
    return ia
 
ia_value = calculate_IA(y_test, predictions)
print("IA值:", ia_value)
 
plt.plot(y_test)
plt.plot(predictions)
plt.legend(['target', 'prediction'])
plt.show()

The above uses a plain GRU; below, the same data is modelled with GRU-Attention:

import numpy as np
import pandas as pd
from torch.utils import data
import torch
from matplotlib import pyplot as plt
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
 
 
loc = r'your_data.csv'
data_csv = pd.read_csv(loc, header=None)
 
yt = data_csv.iloc[2:-1, 5]
yt_1 = yt.shift(1)
yt_2 = yt.shift(2)
yt_3 = yt.shift(3)
yt_4 = yt.shift(4)
yt_5 = yt.shift(5)
yt_6 = yt.shift(6)
yt_7 = yt.shift(7)
data = pd.concat([yt, yt_1, yt_2, yt_3, yt_4, yt_5,yt_6,yt_7], axis=1)
 
data.columns = ['yt', 'yt_1', 'yt_2', 'yt_3', 'yt_4', 'yt_5','yt_6','yt_7']
data.head(10)
data = data.dropna()
x1 = np.array(data['yt_1'], dtype=np.float32)
x1 = torch.tensor(x1)
x2 = torch.tensor(np.array(data['yt_2'], dtype=np.float32))
x3 = torch.tensor(np.array(data['yt_3'], dtype=np.float32))
x4 = torch.tensor(np.array(data['yt_4'], dtype=np.float32))
x5 = torch.tensor(np.array(data['yt_5'], dtype=np.float32))
x6 = torch.tensor(np.array(data['yt_6'], dtype=np.float32))
x7 = torch.tensor(np.array(data['yt_7'], dtype=np.float32))
 
x = torch.cat((x7,x6,x5,x4,x3,x2,x1), dim=0)
x = x.reshape(7, -1).T
 
y = np.array(data['yt'], dtype=np.float32)
y = y.reshape(len(y), 1)
y = torch.tensor(y)
 
scaler_x = preprocessing.MinMaxScaler(feature_range=(-1, 1))
scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
 
x = scaler_x.fit_transform(x)
y = scaler_y.fit_transform(y)
 
train_end = 5479
 
x_train = torch.tensor(x[0:train_end, ], dtype=torch.float32)
y_train = torch.tensor(y[0:train_end, ], dtype=torch.float32)
x_test = torch.tensor(x[train_end + 1:-1], dtype=torch.float32)
y_test = torch.tensor(y[train_end + 1:-1], dtype=torch.float32)
 
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))
print(x_train.shape)  # e.g. torch.Size([5479, 7, 1])
print(y_train.shape)  # e.g. torch.Size([5479, 1])
print(x_test.shape)   # (n_test, 7, 1)
print(y_test.shape)   # (n_test, 1)
seed = 2019
np.random.seed(seed)
torch.manual_seed(seed)  # also seed PyTorch for reproducible weight initialization
class GRUAttention(nn.Module):
    def __init__(self, input_size, hidden_size, attention_size, output_size):
        super(GRUAttention, self).__init__()
        self.hidden_size = hidden_size
        
        # GRU layer
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        
        # additive self-attention layers
        self.query = nn.Linear(hidden_size, attention_size)
        self.key = nn.Linear(hidden_size, attention_size)
        self.energy = nn.Linear(attention_size,1)
        self.tran = nn.Linear(32, 32)
        
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc3 = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, output_size)
        
#         # fully connected output layer (unused alternative)
#         self.fc = nn.Linear(hidden_size, output_size)
 
    def forward(self, x):
        # GRU step
        hidden, _ = self.gru(x)
        
        # self-attention step
        query = self.query(hidden)
        key = self.key(hidden)
        energy = self.energy(torch.tanh(query + key))
        attention_weights = torch.softmax(energy, dim=1)
        attended_hidden = torch.sum(hidden * attention_weights, dim=1)
        
#         out = self.tran(attended_hidden)
        out = self.fc1(attended_hidden)
        out = self.act1(out)
        out = self.fc3(out)
        # fully connected layers
        out = self.dense(self.act2(out))
        
        return out
    
input_size = 1
hidden_size = 32
attention_size = 32
output_size = 1
model = GRUAttention(input_size, hidden_size, attention_size, output_size)
 
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
 
batch_size =128
num_epochs = 50
for epoch in range(num_epochs):
    for i in range(0, len(x_train), batch_size):
        batch_x = x_train[i:i + batch_size]
        batch_y = y_train[i:i + batch_size]
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
    
    print('Epoch: %d, Loss: %f' % (epoch, float(loss)))
 
model.eval()
with torch.no_grad():
    outputs_train = model(x_train)
score_train = criterion(outputs_train, y_train).item()
 
with torch.no_grad():
    outputs_test = model(x_test)
score_test = criterion(outputs_test, y_test).item()
 
print('In Train MSE=', round(score_train, 5))
print('In Test MSE=', round(score_test, 5))
 
y_test = scaler_y.inverse_transform(np.array(y_test).reshape((len(y_test), 1)))
predictions = model(x_test).detach().numpy()
predictions = scaler_y.inverse_transform(np.array(predictions).reshape((len(predictions), 1)))
 
rmse = np.sqrt(mean_squared_error(y_test, predictions))
 
print("RMSE:", rmse)
 
mae = mean_absolute_error(y_test, predictions)
print("MAE:", mae)
 
r2 = r2_score(y_test, predictions)
print("R2:", r2)
def calculate_mape(actual, predicted):
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted lists must have the same length")
    if 0 in actual:
        raise ValueError("actual list must not contain zero values")
    
    percentage_errors = [abs((actual[i] - predicted[i]) / actual[i]) for i in range(len(actual))]
    mape = sum(percentage_errors) *100 / len(actual)
    return mape
mape = calculate_mape(y_test, predictions)
print("MAPE:", mape)
 
def calculate_IA(observed, predicted):
    numerator = np.sum((observed - predicted) ** 2)
    denominator = np.sum((np.abs(predicted - np.mean(observed)) + np.abs(observed - np.mean(observed))) ** 2)
    ia = 1 - (numerator / denominator)
    return ia
 
ia_value = calculate_IA(y_test, predictions)
print("IA值:", ia_value)
 
plt.plot(y_test)
plt.plot(predictions)
plt.legend(['target', 'prediction'])
plt.show()

Of course, when building the GRU-Attention model you can also take another route and define your own Attention class:

class Attention(nn.Module):
    def __init__(self,embed_dim):
        super(Attention,self).__init__()
        self.query = nn.Linear(embed_dim,embed_dim)
        self.key = nn.Linear(embed_dim,embed_dim)
        self.value = nn.Linear(embed_dim,embed_dim)
        self.act = nn.Tanh()
        
    def forward(self,x):
        q = self.act(self.query(x))
        k = self.act(self.key(x))
        v = self.act(self.value(x))
        attn_weights = torch.matmul(q,k.transpose(1,2))
        attn_weights = nn.functional.softmax(attn_weights,dim=-1)
        attended_values = torch.matmul(attn_weights,v)
        return attended_values
 
class GRUModel(nn.Module):
    def __init__(self):
        super(GRUModel, self).__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=32, num_layers=1, batch_first=True)  # batch_first so the attention sees (batch, seq, dim)
        self.attention = Attention(32)
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, 1)
 
    def forward(self, x):
        out, _ = self.gru(x)
        out = self.attention(out)
        out = self.fc1(out[:,-1,:])
        out = self.fc(self.act1(out))
        out = self.dense(self.act2(out))
        return out
 
model = GRUModel()
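
A quick way to sanity-check the attention wiring is to push a dummy batch through the model and confirm the output shape; a hypothetical smoke test, not part of the original pipeline:

dummy = torch.randn(8, 7, 1)   # (batch, seq_len, input_size)
print(model(dummy).shape)      # expected: torch.Size([8, 1])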

--------------------------------------------------------------------------------------------------------------------------------

The above handles data at daily resolution; below is the GRU model for data with a 4-hour period:

import numpy as np
import pandas as pd
from torch.utils import data
import torch
from matplotlib import pyplot as plt
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
 
loc = '周期为4小时数据.csv'
data_csv = pd.read_csv(loc, header=None)
 
yt = data_csv.iloc[1:-1, 1]
yt_1 = yt.shift(1)
yt_2 = yt.shift(2)
yt_3 = yt.shift(3)
yt_4 = yt.shift(4)
yt_5 = yt.shift(5)
yt_6 = yt.shift(6)
yt_7 = yt.shift(7)
data = pd.concat([yt, yt_1, yt_2, yt_3, yt_4, yt_5,yt_6,yt_7], axis=1)
 
data.columns = ['yt', 'yt_1', 'yt_2', 'yt_3', 'yt_4', 'yt_5','yt_6','yt_7']
data.head(10)
data = data.dropna()
x1 = np.array(data['yt_1'], dtype=np.float32)
x1 = torch.tensor(x1)
x2 = torch.tensor(np.array(data['yt_2'], dtype=np.float32))
x3 = torch.tensor(np.array(data['yt_3'], dtype=np.float32))
x4 = torch.tensor(np.array(data['yt_4'], dtype=np.float32))
x5 = torch.tensor(np.array(data['yt_5'], dtype=np.float32))
x6 = torch.tensor(np.array(data['yt_6'], dtype=np.float32))
x7 = torch.tensor(np.array(data['yt_7'], dtype=np.float32))
 
x = torch.cat((x7,x6,x5,x4,x3,x2,x1), dim=0)
x = x.reshape(7, -1).T
 
y = np.array(data['yt'], dtype=np.float32)
y = y.reshape(len(y), 1)
y = torch.tensor(y)
 
scaler_x = preprocessing.MinMaxScaler(feature_range=(-1, 1))
scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
 
x = scaler_x.fit_transform(x)
y = scaler_y.fit_transform(y)
 
train_end = 4466
 
x_train = torch.tensor(x[0:train_end, ], dtype=torch.float32)
y_train = torch.tensor(y[0:train_end, ], dtype=torch.float32)
x_test = torch.tensor(x[train_end + 1:-1], dtype=torch.float32)
y_test = torch.tensor(y[train_end + 1:-1], dtype=torch.float32)
 
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))
 
seed = 2019
np.random.seed(seed)
torch.manual_seed(seed)  # also seed PyTorch for reproducible weight initialization
 
 
class GRUModel(nn.Module):
    def __init__(self):
        super(GRUModel, self).__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=32, num_layers=1, batch_first=True)  # batch_first: input is (batch, seq, feature)
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, 1)
 
    def forward(self, x):
        out, _ = self.gru(x)
        out = self.fc1(out[:,-1,:])
        out = self.fc(self.act1(out))
        out = self.dense(self.act2(out))
        return out
 
 
# model = GRUModel()
# torch.save(model, 'gru2.pt')
model = torch.load('gru2.pt')
 
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
 
batch_size = 48
num_epochs = 50
for epoch in range(num_epochs):
    for i in range(0, len(x_train), batch_size):
        batch_x = x_train[i:i + batch_size]
        batch_y = y_train[i:i + batch_size]
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
        
    print('Epoch: %d, Loss: %f' % (epoch, float(loss)))
 
model.eval()
with torch.no_grad():
    outputs_train = model(x_train)
score_train = criterion(outputs_train, y_train).item()
 
with torch.no_grad():
    outputs_test = model(x_test)
score_test = criterion(outputs_test, y_test).item()
 
print('In Train MSE=', round(score_train, 5))
print('In Test MSE=', round(score_test, 5))
 
y_test = scaler_y.inverse_transform(np.array(y_test).reshape((len(y_test), 1)))
predictions = model(x_test).detach().numpy()
predictions = scaler_y.inverse_transform(np.array(predictions).reshape((len(predictions), 1)))
 
rmse = np.sqrt(mean_squared_error(y_test, predictions))
print("RMSE:", rmse)
 
mae = mean_absolute_error(y_test, predictions)
print("MAE:", mae)
 
r2 = r2_score(y_test, predictions)
print("R2:", r2)
 
def calculate_mape(actual, predicted):
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted lists must have the same length")
    if 0 in actual:
        raise ValueError("actual list must not contain zero values")
    
    percentage_errors = [abs((actual[i] - predicted[i]) / actual[i]) for i in range(len(actual))]
    mape = sum(percentage_errors) *100 / len(actual)
    return mape
mape = calculate_mape(y_test, predictions)
print("MAPE:", mape)
 
def calculate_IA(observed, predicted):
    numerator = np.sum((observed - predicted) ** 2)
    denominator = np.sum((np.abs(predicted - np.mean(observed)) + np.abs(observed - np.mean(observed))) ** 2)
    ia = 1 - (numerator / denominator)
    return ia
 
ia_value = calculate_IA(y_test, predictions)
print("IA值:", ia_value)
 
plt.plot(y_test)
plt.plot(predictions)
plt.legend(['target', 'prediction'])
plt.show()

The GRU-Attention model:

import numpy as np
import pandas as pd
from torch.utils import data
import torch
from matplotlib import pyplot as plt
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
 
 
loc = 'D:/研一作业_全/dataset/zhangpin (2).csv'
data_csv = pd.read_csv(loc, header=None)
 
yt = data_csv.iloc[1:-1, 1]
yt_1 = yt.shift(1)
yt_2 = yt.shift(2)
yt_3 = yt.shift(3)
yt_4 = yt.shift(4)
yt_5 = yt.shift(5)
yt_6 = yt.shift(6)
yt_7 = yt.shift(7)
data = pd.concat([yt, yt_1, yt_2, yt_3, yt_4, yt_5,yt_6,yt_7], axis=1)
 
data.columns = ['yt', 'yt_1', 'yt_2', 'yt_3', 'yt_4', 'yt_5','yt_6','yt_7']
data.head(10)
data = data.dropna()
x1 = np.array(data['yt_1'], dtype=np.float32)
x1 = torch.tensor(x1)
x2 = torch.tensor(np.array(data['yt_2'], dtype=np.float32))
x3 = torch.tensor(np.array(data['yt_3'], dtype=np.float32))
x4 = torch.tensor(np.array(data['yt_4'], dtype=np.float32))
x5 = torch.tensor(np.array(data['yt_5'], dtype=np.float32))
x6 = torch.tensor(np.array(data['yt_6'], dtype=np.float32))
x7 = torch.tensor(np.array(data['yt_7'], dtype=np.float32))
 
x = torch.cat((x7,x6,x5,x4,x3,x2,x1), dim=0)
x = x.reshape(7, -1).T
 
y = np.array(data['yt'], dtype=np.float32)
y = y.reshape(len(y), 1)
y = torch.tensor(y)
 
scaler_x = preprocessing.MinMaxScaler(feature_range=(-1, 1))
scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
 
x = scaler_x.fit_transform(x)
y = scaler_y.fit_transform(y)
 
train_end = 5241
 
x_train = torch.tensor(x[0:train_end, ], dtype=torch.float32)
y_train = torch.tensor(y[0:train_end, ], dtype=torch.float32)
x_test = torch.tensor(x[train_end + 1:-1], dtype=torch.float32)
y_test = torch.tensor(y[train_end + 1:-1], dtype=torch.float32)
 
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))
 
seed = 2019
np.random.seed(seed)
torch.manual_seed(seed)  # also seed PyTorch for reproducible weight initialization
 
class GRUAttention(nn.Module):
    def __init__(self, input_size, hidden_size, attention_size, output_size):
        super(GRUAttention, self).__init__()
        self.hidden_size = hidden_size
        
        # GRU layer
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        
        # additive self-attention layers
        self.query = nn.Linear(hidden_size, attention_size)
        self.key = nn.Linear(hidden_size, attention_size)
        self.energy = nn.Linear(attention_size,1)
        self.tran = nn.Linear(32, 32)
        
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc3 = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, output_size)
        
#         # fully connected output layer (unused alternative)
#         self.fc = nn.Linear(hidden_size, output_size)
 
    def forward(self, x):
        # GRU step
        hidden, _ = self.gru(x)
        
        # self-attention step
        query = self.query(hidden)
        key = self.key(hidden)
        energy = self.energy(torch.tanh(query + key))
        attention_weights = torch.softmax(energy, dim=1)
        attended_hidden = torch.sum(hidden * attention_weights, dim=1)
        
#         out = self.tran(attended_hidden)
        out = self.fc1(attended_hidden)
        out = self.act1(out)
        out = self.fc3(out)
        # fully connected layers
        out = self.dense(self.act2(out))
        
        return out
    
input_size = 1
hidden_size = 32
attention_size = 32
output_size = 1
# model = GRUAttention(input_size, hidden_size, attention_size, output_size)
 
# class GRUAttentionModel(nn.Module):
#     def __init__(self):
#         super(GRUAttentionModel, self).__init__()
#         self.gru = nn.GRU(input_size=1, hidden_size=32, batch_first = True)
#         self.attention =nn.Linear(32,32)
#         self.fc1 = nn.Linear(32, 16)
#         self.act1 = nn.Tanh()
#         self.fc = nn.Linear(16, 4)
#         self.act2 = nn.Tanh()
#         self.dense = nn.Linear(4, 1)
 
#     def forward(self, x):
#         out, hidden = self.gru(x)
#         print(out.shape)
#         print(hidden.shape)
#         attention_weights = torch.softmax(self.attention(out), dim=1)
#         out = torch.sum(attention_weights * out, dim=1)
#         out = self.fc1(self.act1(out))
#         out = self.fc(self.act2(out))
#         out = self.dense(out)
#         return out
 
 
# model = GRUAttentionModel()
 
# class GRUWithAttention(nn.Module):
#     def __init__(self, input_dim, hidden_dim, output_dim):
#         super(GRUWithAttention, self).__init__()
#         self.gru = nn.GRU(input_dim, hidden_dim, bidirectional=True)
#         self.attention = nn.Linear(hidden_dim * 2, 1)
#         self.fc = nn.Linear(hidden_dim * 2, output_dim)
 
#     def forward(self, x):
#         output, _ = self.gru(x)
#         attention_weights = torch.softmax(self.attention(output), dim=0)
#         context = torch.sum(output * attention_weights, dim=0)
#         prediction = self.fc(context)
#         return prediction
 
 
# model = GRUAttentionModel()
# data preparation
# input_dim = 1  # input feature dimension
# hidden_dim = 32  # GRU hidden dimension
# output_dim = 1  # output feature dimension
 
# torch.save(model, 'gru-attention.pt')
 
model = torch.load('gru-attention1.pt')
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
 
batch_size = 48
num_epochs = 100
for epoch in range(num_epochs):
    for i in range(0, len(x_train), batch_size):
        batch_x = x_train[i:i + batch_size]
        batch_y = y_train[i:i + batch_size]
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
    
    print('Epoch: %d, Loss: %f' % (epoch, float(loss)))
 
model.eval()
with torch.no_grad():
    outputs_train = model(x_train)
score_train = criterion(outputs_train, y_train).item()
 
with torch.no_grad():
    outputs_test = model(x_test)
score_test = criterion(outputs_test, y_test).item()
 
print('In Train MSE=', round(score_train, 5))
print('In Test MSE=', round(score_test, 5))
 
y_test = scaler_y.inverse_transform(np.array(y_test).reshape((len(y_test), 1)))
predictions = model(x_test).detach().numpy()
predictions = scaler_y.inverse_transform(np.array(predictions).reshape((len(predictions), 1)))
 
rmse = np.sqrt(mean_squared_error(y_test, predictions))
 
print("RMSE:", rmse)
 
mae = mean_absolute_error(y_test, predictions)
print("MAE:", mae)
 
r2 = r2_score(y_test, predictions)
print("R2:", r2)
 
def calculate_mape(actual, predicted):
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted lists must have the same length")
    if 0 in actual:
        raise ValueError("actual list must not contain zero values")
    
    percentage_errors = [abs((actual[i] - predicted[i]) / actual[i]) for i in range(len(actual))]
    mape = sum(percentage_errors) * 100 / len(actual)
    return mape
mape = calculate_mape(y_test, predictions)
print("MAPE:", mape)
 
def calculate_IA(observed, predicted):
    numerator = np.sum((observed - predicted) ** 2)
    denominator = np.sum((np.abs(predicted - np.mean(observed)) + np.abs(observed - np.mean(observed))) ** 2)
    ia = 1 - (numerator / denominator)
    return ia
 
ia_value = calculate_IA(y_test, predictions)
print("IA值:", ia_value)
 
plt.plot(y_test)
plt.plot(predictions)
plt.legend(['target', 'prediction'])
plt.show()
y_test = np.ravel(y_test)
predictions = np.ravel(predictions)
fit = np.polyfit(y_test, predictions, 1)
fit_line = np.polyval(fit, y_test)
plt.scatter(y_test, predictions, label='Data')
plt.plot(y_test, fit_line, color='red', label='Fit Line')
plt.xlabel("real_value")
plt.ylabel("prediction_value")
 
# set the figure title, showing the R-squared value
plt.title(f"Scatter Map (R-squared = {r2:.4f})")
 
# show the legend
plt.legend()
 
# show the figure
plt.show()

--------------------------------------------------------------------------------------------------------------------------------

Below is GRU prediction for a univariate series at 1-hour time resolution:

import numpy as np
import pandas as pd
from torch.utils import data
import torch
from matplotlib import pyplot as plt
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
 
loc = 'D:/研一作业_全/dataset/zhangpin (2).csv'
data_csv = pd.read_csv(loc, header=None)
 
yt = data_csv.iloc[1:-1, 1]
yt_1 = yt.shift(1)
yt_2 = yt.shift(2)
yt_3 = yt.shift(3)
yt_4 = yt.shift(4)
yt_5 = yt.shift(5)
yt_6 = yt.shift(6)
yt_7 = yt.shift(7)
data = pd.concat([yt, yt_1, yt_2, yt_3, yt_4, yt_5,yt_6,yt_7], axis=1)
 
data.columns = ['yt', 'yt_1', 'yt_2', 'yt_3', 'yt_4', 'yt_5','yt_6','yt_7']
data.head(10)
data = data.dropna()
x1 = np.array(data['yt_1'], dtype=np.float32)
x1 = torch.tensor(x1)
x2 = torch.tensor(np.array(data['yt_2'], dtype=np.float32))
x3 = torch.tensor(np.array(data['yt_3'], dtype=np.float32))
x4 = torch.tensor(np.array(data['yt_4'], dtype=np.float32))
x5 = torch.tensor(np.array(data['yt_5'], dtype=np.float32))
x6 = torch.tensor(np.array(data['yt_6'], dtype=np.float32))
x7 = torch.tensor(np.array(data['yt_7'], dtype=np.float32))
 
x = torch.cat((x7,x6,x5,x4,x3,x2,x1), dim=0)
x = x.reshape(7, -1).T
 
y = np.array(data['yt'], dtype=np.float32)
y = y.reshape(len(y), 1)
y = torch.tensor(y)
 
scaler_x = preprocessing.MinMaxScaler(feature_range=(-1, 1))
scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
 
x = scaler_x.fit_transform(x)
y = scaler_y.fit_transform(y)
 
train_end = 5241
 
x_train = torch.tensor(x[0:train_end, ], dtype=torch.float32)
y_train = torch.tensor(y[0:train_end, ], dtype=torch.float32)
x_test = torch.tensor(x[train_end + 1:-1], dtype=torch.float32)
y_test = torch.tensor(y[train_end + 1:-1], dtype=torch.float32)
 
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))
 
seed = 2019
np.random.seed(seed)
torch.manual_seed(seed)  # also seed PyTorch for reproducible weight initialization
 
 
class GRUModel(nn.Module):
    def __init__(self):
        super(GRUModel, self).__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=32, num_layers=1, batch_first=True)  # batch_first: input is (batch, seq, feature)
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, 1)
 
    def forward(self, x):
        out, _ = self.gru(x)
        out = self.fc1(out[:,-1,:])
        out = self.fc(self.act1(out))
        out = self.dense(self.act2(out))
        return out
 
 
model = GRUModel()
torch.save(model, 'gru_tem.pt')
# model = torch.load('gru2.pt')
 
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
 
batch_size = 48
num_epochs = 50
for epoch in range(num_epochs):
    for i in range(0, len(x_train), batch_size):
        batch_x = x_train[i:i + batch_size]
        batch_y = y_train[i:i + batch_size]
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
        
    print('Epoch: %d, Loss: %f' % (epoch, float(loss)))
 
model.eval()
with torch.no_grad():
    outputs_train = model(x_train)
score_train = criterion(outputs_train, y_train).item()
 
with torch.no_grad():
    outputs_test = model(x_test)
score_test = criterion(outputs_test, y_test).item()
 
print('In Train MSE=', round(score_train, 5))
print('In Test MSE=', round(score_test, 5))
 
y_test = scaler_y.inverse_transform(np.array(y_test).reshape((len(y_test), 1)))
predictions = model(x_test).detach().numpy()
predictions = scaler_y.inverse_transform(np.array(predictions).reshape((len(predictions), 1)))
r2 = r2_score(y_test, predictions)  # r2 is needed for the plot title below
y_test = np.ravel(y_test)
predictions = np.ravel(predictions)
fit = np.polyfit(y_test, predictions, 1)
fit_line = np.polyval(fit, y_test)
plt.scatter(y_test, predictions, label='Data')
plt.plot(y_test, fit_line, color='red', label='Fit Line')
plt.xlabel("real_value")
plt.ylabel("prediction_value")
 
# set the figure title, showing the R-squared value
plt.title(f"Scatter Map (R-squared = {r2:.4f})")
 
# show the legend
plt.legend()
 
# show the figure
plt.show()

---------------------------------------------------------------------------------------------------------------------------------

Multivariate GRU prediction:
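
The new piece relative to the earlier scripts is the sliding-window split over several feature columns. Here is a toy illustration of the indexing that the split_data function below performs, assuming column 0 holds the target and the remaining columns the features:

import numpy as np

data = np.arange(20, dtype=np.float32).reshape(5, 4)   # 5 rows, 4 columns
timestep = 2
X = [data[i + 1:i + 1 + timestep, 1:] for i in range(len(data) - timestep)]
y = [data[i, 0] for i in range(len(data) - timestep)]
print(np.array(X).shape, np.array(y).shape)  # (3, 2, 3) (3,)
# each target y[i] (row i, column 0) is paired with the feature rows i+1 .. i+timestep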

import numpy as np
import pandas as pd
from torch.utils import data
import torch
from matplotlib import pyplot as plt
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
 
loc = 'data.csv'
data_csv = pd.read_csv(loc).values
 
data_csv = data_csv[0:, 1:]  # drop the first column
print(data_csv.shape)
 
x_t = data_csv[0:,1:]
ss_X = preprocessing.MinMaxScaler().fit(x_t)
x_ss_t = ss_X.transform(x_t)
print(x_ss_t)
y_t = data_csv[0:,0]
y_t = y_t.reshape(len(y_t),1)
ss_Y = preprocessing.MinMaxScaler().fit(y_t)
y_ss_t = ss_Y.transform(y_t)
print(y_ss_t)
data_csv_ss = np.concatenate((y_ss_t,x_ss_t),axis=1)
print(data_csv_ss.shape)
print(data_csv_ss)
 
def split_data(data, timestep, input_size):
    dataX = []
    dataY = []

    for index in range(len(data) - timestep):
        # input: the `timestep` feature rows following `index`; target: column 0 at row `index`
        dataX.append(data[index + 1:index + timestep + 1][:, 1:])
        dataY.append(data[index][0])
    dataX = np.array(dataX)
    dataY = np.array(dataY)
    print(dataX.shape)
    print(dataY.shape)
    train_size = 5232
    
    x_train = dataX[:train_size,:,:].reshape(-1,timestep,input_size)
    y_train = dataY[:train_size].reshape(-1,1)
    
    
    x_test = dataX[train_size:,:,:].reshape(-1,timestep,input_size)
    y_test = dataY[train_size:].reshape(-1,1)
    
    return [x_train,y_train,x_test,y_test]
 
timestep = 24
input_size = 6
 
x_train,y_train,x_test,y_test = split_data(data_csv_ss,timestep,input_size)
 
x_train = torch.Tensor(x_train)
y_train = torch.Tensor(y_train)
x_test = torch.Tensor(x_test)
y_test = torch.Tensor(y_test)
 
class GRUModel(nn.Module):
    def __init__(self):
        super(GRUModel, self).__init__()
        self.gru = nn.GRU(input_size=6, hidden_size=32, num_layers=1, batch_first=True)  # batch_first: input is (batch, seq, feature)
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, 1)
 
    def forward(self, x):
        out, _ = self.gru(x)
        out = self.fc1(out[:,-1,:])
        out = self.fc(self.act1(out))
        out = self.dense(self.act2(out))
        return out
 
 
model = GRUModel()
# torch.save(model, 'gru2.pt')
# model = torch.load('gru2.pt')
 
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
 
batch_size = 48
num_epochs = 50
for epoch in range(num_epochs):
    for i in range(0, len(x_train), batch_size):
        batch_x = x_train[i:i + batch_size]
        batch_y = y_train[i:i + batch_size]
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
        
    print('Epoch: %d, Loss: %f' % (epoch, float(loss)))
 
model.eval()
with torch.no_grad():
    outputs_train = model(x_train)
score_train = criterion(outputs_train, y_train).item()
 
with torch.no_grad():
    outputs_test = model(x_test)
score_test = criterion(outputs_test, y_test).item()
 
print('In Train MSE=', round(score_train, 5))
print('In Test MSE=', round(score_test, 5))
 
y_test = ss_Y.inverse_transform(np.array(y_test).reshape((len(y_test), 1)))
predictions = model(x_test).detach().numpy()
predictions = ss_Y.inverse_transform(np.array(predictions).reshape((len(predictions), 1)))
rmse = np.sqrt(mean_squared_error(y_test, predictions))
print("RMSE:", rmse)
 
mae = mean_absolute_error(y_test, predictions)
print("MAE:", mae)
 
r2 = r2_score(y_test, predictions)
print("R2:", r2)
 
def calculate_mape(actual, predicted):
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted lists must have the same length")
    if 0 in actual:
        raise ValueError("actual list must not contain zero values")
    
    percentage_errors = [abs((actual[i] - predicted[i]) / actual[i]) for i in range(len(actual))]
    mape = sum(percentage_errors) *100 / len(actual)
    return mape
mape = calculate_mape(y_test, predictions)
print("MAPE:", mape)
 
def calculate_IA(observed, predicted):
    numerator = np.sum((observed - predicted) ** 2)
    denominator = np.sum((np.abs(predicted - np.mean(observed)) + np.abs(observed - np.mean(observed))) ** 2)
    ia = 1 - (numerator / denominator)
    return ia
 
ia_value = calculate_IA(y_test, predictions)
print("IA值:", ia_value)
 
plt.plot(y_test)
plt.plot(predictions)
plt.legend(['target', 'prediction'])
plt.show()

Multivariate GRU-Attention:

import numpy as np
import pandas as pd
from torch.utils import data
import torch
from matplotlib import pyplot as plt
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
 
loc = 'data.csv'
data_csv = pd.read_csv(loc).values
 
data_csv = data_csv[0:, 1:]  # drop the first column, as before
 
x_t = data_csv[0:,1:]
ss_X = preprocessing.MinMaxScaler().fit(x_t)
x_ss_t = ss_X.transform(x_t)
y_t = data_csv[0:,0]
y_t = y_t.reshape(len(y_t),1)
ss_Y = preprocessing.MinMaxScaler().fit(y_t)
y_ss_t = ss_Y.transform(y_t)
 
data_csv_ss = np.concatenate((y_ss_t,x_ss_t),axis=1)
 
def split_data(data, timestep, input_size):
    dataX = []
    dataY = []

    for index in range(len(data) - timestep):
        # same windowing as before: feature rows index+1 .. index+timestep, target at row index
        dataX.append(data[index + 1:index + timestep + 1][:, 1:])
        dataY.append(data[index][0])
    dataX = np.array(dataX)
    dataY = np.array(dataY)
    print(dataX.shape)
    print(dataY.shape)
    train_size = 5232
    
    x_train = dataX[:train_size,:,:].reshape(-1,timestep,input_size)
    y_train = dataY[:train_size].reshape(-1,1)
    
    
    x_test = dataX[train_size:,:,:].reshape(-1,timestep,input_size)
    y_test = dataY[train_size:].reshape(-1,1)
    
    return [x_train,y_train,x_test,y_test]
 
timestep = 24
input_size = 6
 
x_train,y_train,x_test,y_test = split_data(data_csv_ss,timestep,input_size)
 
x_train = torch.Tensor(x_train)
y_train = torch.Tensor(y_train)
x_test = torch.Tensor(x_test)
y_test = torch.Tensor(y_test)
 
class GRUAttention(nn.Module):
    def __init__(self, input_size, hidden_size, attention_size, output_size):
        super(GRUAttention, self).__init__()
        self.hidden_size = hidden_size
        
        # GRU layer
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        
        # additive self-attention layers
        self.query = nn.Linear(hidden_size, attention_size)
        self.key = nn.Linear(hidden_size, attention_size)
        self.energy = nn.Linear(attention_size,1)
        
        self.fc1 = nn.Linear(32, 16)
        self.act1 = nn.Tanh()
        self.fc3 = nn.Linear(16, 4)
        self.act2 = nn.Tanh()
        self.dense = nn.Linear(4, output_size)
 
    def forward(self, x):
        # GRU step
        hidden, _ = self.gru(x)
        
        # self-attention step
        query = self.query(hidden)
        key = self.key(hidden)
        energy = self.energy(torch.tanh(query + key))
        attention_weights = torch.softmax(energy, dim=1)
        attended_hidden = torch.sum(hidden * attention_weights, dim=1)
        
        out = self.fc1(attended_hidden)
        out = self.act1(out)
        out = self.fc3(out)
        # fully connected layers
        out = self.dense(self.act2(out))
        
        return out
    
input_size = 6
hidden_size = 32
attention_size = 32
output_size = 1
 
model = GRUAttention(input_size, hidden_size, attention_size, output_size)
 
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
 
batch_size = 48
num_epochs = 50
for epoch in range(num_epochs):
    for i in range(0, len(x_train), batch_size):
        batch_x = x_train[i:i + batch_size]
        batch_y = y_train[i:i + batch_size]
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
    
    print('Epoch: %d, Loss: %f' % (epoch, float(loss)))
 
model.eval()
with torch.no_grad():
    outputs_train = model(x_train)
score_train = criterion(outputs_train, y_train).item()
 
with torch.no_grad():
    outputs_test = model(x_test)
score_test = criterion(outputs_test, y_test).item()
 
print('In Train MSE=', round(score_train, 5))
print('In Test MSE=', round(score_test, 5))
 
y_test = ss_Y.inverse_transform(np.array(y_test).reshape((len(y_test), 1)))
predictions = model(x_test).detach().numpy()
predictions = ss_Y.inverse_transform(np.array(predictions).reshape((len(predictions), 1)))
 
rmse = np.sqrt(mean_squared_error(y_test, predictions))
 
print("RMSE:", rmse)
 
mae = mean_absolute_error(y_test, predictions)
print("MAE:", mae)
 
r2 = r2_score(y_test, predictions)
print("R2:", r2)
 
def calculate_mape(actual, predicted):
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted lists must have the same length")
    if 0 in actual:
        raise ValueError("actual list must not contain zero values")
    
    percentage_errors = [abs((actual[i] - predicted[i]) / actual[i]) for i in range(len(actual))]
    mape = sum(percentage_errors) * 100 / len(actual)
    return mape
mape = calculate_mape(y_test, predictions)
print("MAPE:", mape)
 
def calculate_IA(observed, predicted):
    numerator = np.sum((observed - predicted) ** 2)
    denominator = np.sum((np.abs(predicted - np.mean(observed)) + np.abs(observed - np.mean(observed))) ** 2)
    ia = 1 - (numerator / denominator)
    return ia
 
ia_value = calculate_IA(y_test, predictions)
print("IA值:", ia_value)
 
plt.plot(y_test)
plt.plot(predictions)
plt.legend(['target', 'prediction'])
plt.show()
y_test = np.ravel(y_test)
predictions = np.ravel(predictions)
fit = np.polyfit(y_test, predictions, 1)
fit_line = np.polyval(fit, y_test)
plt.scatter(y_test, predictions, label='Data')
plt.plot(y_test, fit_line, color='red', label='Fit Line')
plt.xlabel("real_value")
plt.ylabel("prediction_value")
 
# set the figure title, showing the R-squared value
plt.title(f"Scatter Map (R-squared = {r2:.4f})")
 
# show the legend
plt.legend()
 
# show the figure
plt.show()

 


