
Logistic Regression

Posted: 2022-10-22 11:34:08  Views: 39
Tags: logistic regression, 0.5, label, prdict, np, theta, data

 

 

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# 16 training points with binary labels (first 8 negative, last 8 positive)
label = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1], np.float32).reshape(16, 1)
data = np.array(
    [(0.5, 0.5), (0.5, 1.0), (0.5, 1.5), (0.8, 2.0), (0.6, 1.2), (0.9, 1.3), (1.2, 0.9), (1.5, 0.5),
     (1.1, 2.9), (1.5, 3.0), (1.6, 2.5), (1.8, 2.0), (1.9, 3.1), (2.3, 2.8), (2.3, 1.6), (2.9, 1.5)],
    np.float32)
data_num, data_dim = data.shape

# Append a constant 1 to each sample so the bias term is folded into theta
new_data = np.ones((data_num, data_dim + 1))
new_data[:, :2] = data

# Random initial parameters and learning rate
theta = np.random.normal(size=(data_dim + 1, 1))
lr = 1e-1

for i in range(10000):
    predict = sigmoid(new_data @ theta)
    # Binary cross-entropy loss
    loss = -np.sum(label * np.log(predict) + (1 - label) * np.log(1 - predict))
    # Gradient of the loss w.r.t. theta, then a gradient-descent step
    d_theta = new_data.T @ (predict - label)
    theta -= lr * d_theta
    if i % 1000 == 0:
        print(f"loss: {loss}")
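As a quick check of the training loop above, the sketch below fits the same model on the same 16 points and then thresholds the sigmoid output at 0.5 to measure training accuracy. The fixed random seed and the `X`/`p`/`acc` names are my own additions for reproducibility, not part of the original post; since this toy dataset is linearly separable, gradient descent should classify all training points correctly.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Same 16-point dataset as above: 8 negative, then 8 positive samples
label = np.array([0] * 8 + [1] * 8, np.float32).reshape(16, 1)
data = np.array(
    [(0.5, 0.5), (0.5, 1.0), (0.5, 1.5), (0.8, 2.0), (0.6, 1.2), (0.9, 1.3), (1.2, 0.9), (1.5, 0.5),
     (1.1, 2.9), (1.5, 3.0), (1.6, 2.5), (1.8, 2.0), (1.9, 3.1), (2.3, 2.8), (2.3, 1.6), (2.9, 1.5)],
    np.float32)
X = np.hstack([data, np.ones((16, 1))])  # bias folded in as a column of ones

rng = np.random.default_rng(0)           # fixed seed so the run is reproducible
theta = rng.normal(size=(3, 1))
for _ in range(10000):
    p = sigmoid(X @ theta)
    theta -= 1e-1 * (X.T @ (p - label))  # same gradient step as the main loop

# Threshold the probabilities at 0.5 to get hard class predictions
pred = (sigmoid(X @ theta) > 0.5).astype(np.float32)
acc = float((pred == label).mean())
print(f"training accuracy: {acc}")
```

On this separable toy set the accuracy reaches 1.0; on real data one would evaluate on held-out samples instead of the training set.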

 

From: https://www.cnblogs.com/xiaoruirui/p/16815666.html
