这个项目包含了吴恩达机器学习ex2的python实现,主要知识点为逻辑回归、正则化,题目内容可以查看数据集中的ex2.pdf
代码来自网络(原作者黄广海的github),添加了部分对于题意的中文翻译,以及修改成与习题一致的结构,方便大家理解
另,原来代码中的高次项有些问题,以及缺少作图部分,已经修改补全
其余练习的传送门
ex1:线性回归
ex2:逻辑回归、正则化
ex3:多类别逻辑回归、神经网络
ex4:反向传播神经网络
ex5:偏差和方差、训练集&验证集&测试集
ex6:支持向量机
ex7:K-means和PCA(主要成分分析)
ex8:异常检测和推荐系统(协同过滤)
1 逻辑回归¶
在训练的初始阶段,我们将要构建一个逻辑回归模型来预测,某个学生是否被大学录取。
设想你是大学相关部分的管理者,想通过申请学生两次测试的评分,来决定他们是否被录取。
现在你拥有之前申请学生的可以用于训练逻辑回归的训练样本集。对于每一个训练样本,你有他们两次测试的评分和最后是被录取的结果。
1.1 数据可视化¶
In [1]:import numpy as np import pandas as pd import matplotlib.pyplot as plt
/opt/conda/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds) /opt/conda/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds)In [2]:
path = '/home/kesci/input/andrew_ml_ex22391/ex2data1.txt' data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted']) data.head()Out[2]:
Exam 1 | Exam 2 | Admitted | |
---|---|---|---|
0 | 34.623660 | 78.024693 | 0 |
1 | 30.286711 | 43.894998 | 0 |
2 | 35.847409 | 72.902198 | 0 |
3 | 60.182599 | 86.308552 | 1 |
4 | 79.032736 | 75.344376 | 1 |
positive = data[data['Admitted'].isin([1])] negative = data[data['Admitted'].isin([0])] fig, ax = plt.subplots(figsize=(12,8)) ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted') ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted') ax.legend() ax.set_xlabel('Exam 1 Score') ax.set_ylabel('Exam 2 Score') plt.show()
1.2 实现¶
1.2.1 sigmoid 函数¶
逻辑回归函数为
hθ=g(θTx)g代表一个常用的逻辑函数(logistic function)为S形函数(Sigmoid function),公式为:
g(z)=11+e−z
合起来,我们得到逻辑回归模型的假设函数:
# 实现sigmoid函数 def sigmoid(z): return 1 / (1 + np.exp(-z))
1.2.2 代价函数和梯度¶
代价函数:
J(θ)=1m∑i=1m[−y(i)log(hθ(x(i)))−(1−y(i))log(1−hθ(x(i)))]
梯度:
虽然这个梯度和前面线性回归的梯度很像,但是要记住$h_\theta(x)$是不一样的
实现完成后,用初始$\theta$代入计算,结果应该是0.693左右
# 实现代价函数 def cost(theta, X, y): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) first = np.multiply(-y, np.log(sigmoid(X * theta.T))) second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T))) return np.sum(first - second) / (len(X))
初始化X,y,$\theta$
In [6]:# 加一列常数列 data.insert(0, 'Ones', 1) # 初始化X,y,θ cols = data.shape[1] X = data.iloc[:,0:cols-1] y = data.iloc[:,cols-1:cols] theta = np.zeros(3) # 转换X,y的类型 X = np.array(X.values) y = np.array(y.values)In [7]:
# 检查矩阵的维度 X.shape, theta.shape, y.shapeOut[7]:
((100, 3), (3,), (100, 1))In [8]:
# 用初始θ计算代价 cost(theta, X, y)Out[8]:
0.6931471805599453In [9]:
# 实现梯度计算的函数(并没有更新θ) def gradient(theta, X, y): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) parameters = int(theta.ravel().shape[1]) grad = np.zeros(parameters) error = sigmoid(X * theta.T) - y for i in range(parameters): term = np.multiply(error, X[:,i]) grad[i] = np.sum(term) / len(X) return grad
1.2.3 用工具库计算θ的值¶
在此前的线性回归中,我们自己写代码实现的梯度下降(ex1的2.2.4的部分)。当时我们写了一个代价函数、计算了他的梯度,然后对他执行了梯度下降的步骤。这次,我们不自己写代码实现梯度下降,我们会调用一个已有的库。这就是说,我们不用自己定义迭代次数和步长,功能会直接告诉我们最优解。
andrew ng在课程中用的是Octave的“fminunc”函数,由于我们使用Python,我们可以用scipy.optimize.fmin_tnc做同样的事情。
(另外,如果对fminunc有疑问的,可以参考下面这篇百度文库的内容https://wenku.baidu.com/view/2f6ce65d0b1c59eef8c7b47a.html )
如果一切顺利的话,最有θ对应的代价应该是0.203
import scipy.optimize as opt result = opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y)) resultOut[10]:
(array([-25.16131863, 0.20623159, 0.20147149]), 36, 0)
让我们看看在这个结论下代价函数计算结果是什么个样子~
In [11]:# 用θ的计算结果代回代价函数计算 cost(result[0], X, y)Out[11]:
0.20349770158947458
画出决策曲线
In [12]:plotting_x1 = np.linspace(30, 100, 100) plotting_h1 = ( - result[0][0] - result[0][1] * plotting_x1) / result[0][2] fig, ax = plt.subplots(figsize=(12,8)) ax.plot(plotting_x1, plotting_h1, 'y', label='Prediction') ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted') ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted') ax.legend() ax.set_xlabel('Exam 1 Score') ax.set_ylabel('Exam 2 Score') plt.show()
1.2.4 评价逻辑回归模型¶
在确定参数之后,我们可以使用这个模型来预测学生是否录取。如果一个学生exam1得分45,exam2得分85,那么他录取的概率应为0.776
In [13]:# 实现hθ def hfunc1(theta, X): return sigmoid(np.dot(theta.T, X)) hfunc1(result[0],[1,45,85])Out[13]:
0.7762906238162848
另一种评价θ的方法是看模型在训练集上的正确率怎样。写一个predict的函数,给出数据以及参数后,会返回“1”或者“0”。然后再把这个predict函数用于训练集上,看准确率怎样。
In [14]:# 定义预测函数 def predict(theta, X): probability = sigmoid(X * theta.T) return [1 if x >= 0.5 else 0 for x in probability]In [15]:
# 统计预测正确率 theta_min = np.matrix(result[0]) predictions = predict(theta_min, X) correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y)] accuracy = (sum(map(int, correct)) % len(correct)) print ('accuracy = {0}%'.format(accuracy))
accuracy = 89%
画出对应曲线
2 正则化逻辑回归¶
在训练的第二部分,我们将实现加入正则项提升逻辑回归算法。
设想你是工厂的生产主管,你有一些芯片在两次测试中的测试结果,测试结果决定是否芯片要被接受或抛弃。你有一些历史数据,帮助你构建一个逻辑回归模型。
2.1 数据可视化¶
In [16]:path = '/home/kesci/input/andrew_ml_ex22391/ex2data2.txt' data_init = pd.read_csv(path, header=None, names=['Test 1', 'Test 2', 'Accepted']) data_init.head()Out[16]:
Test 1 | Test 2 | Accepted | |
---|---|---|---|
0 | 0.051267 | 0.69956 | 1 |
1 | -0.092742 | 0.68494 | 1 |
2 | -0.213710 | 0.69225 | 1 |
3 | -0.375000 | 0.50219 | 1 |
4 | -0.513250 | 0.46564 | 1 |
positive2 = data_init[data_init['Accepted'].isin([1])] negative2 = data_init[data_init['Accepted'].isin([0])] fig, ax = plt.subplots(figsize=(12,8)) ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted') ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected') ax.legend() ax.set_xlabel('Test 1 Score') ax.set_ylabel('Test 2 Score') plt.show()
以上图片显示,这个数据集不能像之前一样使用直线将两部分分割。而逻辑回归只适用于线性的分割,所以,这个数据集不适合直接使用逻辑回归。
2.2 特征映射¶
一种更好的使用数据集的方式是为每组数据创造更多的特征。所以我们为每组$x_1,x_2$添加了最高到6次幂的特征
In [18]:degree = 6 data2 = data_init x1 = data2['Test 1'] x2 = data2['Test 2'] data2.insert(3, 'Ones', 1) for i in range(1, degree+1): for j in range(0, i+1): data2['F' + str(i-j) + str(j)] = np.power(x1, i-j) * np.power(x2, j) #此处原答案错误较多,已经更正 data2.drop('Test 1', axis=1, inplace=True) data2.drop('Test 2', axis=1, inplace=True) data2.head()Out[18]:
Accepted | Ones | F10 | F01 | F20 | F11 | F02 | F30 | F21 | F12 | ... | F23 | F14 | F05 | F60 | F51 | F42 | F33 | F24 | F15 | F06 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 1 | 0.051267 | 0.69956 | 0.002628 | 0.035864 | 0.489384 | 0.000135 | 0.001839 | 0.025089 | ... | 0.000900 | 0.012278 | 0.167542 | 1.815630e-08 | 2.477505e-07 | 0.000003 | 0.000046 | 0.000629 | 0.008589 | 0.117206 |
1 | 1 | 1 | -0.092742 | 0.68494 | 0.008601 | -0.063523 | 0.469143 | -0.000798 | 0.005891 | -0.043509 | ... | 0.002764 | -0.020412 | 0.150752 | 6.362953e-07 | -4.699318e-06 | 0.000035 | -0.000256 | 0.001893 | -0.013981 | 0.103256 |
2 | 1 | 1 | -0.213710 | 0.69225 | 0.045672 | -0.147941 | 0.479210 | -0.009761 | 0.031616 | -0.102412 | ... | 0.015151 | -0.049077 | 0.158970 | 9.526844e-05 | -3.085938e-04 | 0.001000 | -0.003238 | 0.010488 | -0.033973 | 0.110047 |
3 | 1 | 1 | -0.375000 | 0.50219 | 0.140625 | -0.188321 | 0.252195 | -0.052734 | 0.070620 | -0.094573 | ... | 0.017810 | -0.023851 | 0.031940 | 2.780914e-03 | -3.724126e-03 | 0.004987 | -0.006679 | 0.008944 | -0.011978 | 0.016040 |
4 | 1 | 1 | -0.513250 | 0.46564 | 0.263426 | -0.238990 | 0.216821 | -0.135203 | 0.122661 | -0.111283 | ... | 0.026596 | -0.024128 | 0.021890 | 1.827990e-02 | -1.658422e-02 | 0.015046 | -0.013650 | 0.012384 | -0.011235 | 0.010193 |
5 rows × 29 columns
3.2 代价函数和梯度¶
这一部分要实现计算逻辑回归的代价函数和梯度的函数。代价函数公式如下:
J(θ)=1m∑i=1m[−y(i)log(hθ(x(i)))−(1−y(i))log(1−hθ(x(i)))]+λ2m∑j=1nθj2
记住$\theta_0$是不需要正则化的,下标从1开始。
梯度的第j个元素的更新公式为:
θj:=θj−a1m∑i=1m[hθ(x(i))−y(i)]xj(i)+λmθj
对上面的算法中 j=1,2,...,n 时的更新式子进行调整可得:
把初始$\theta$(所有元素为0)带入,代价应为0.693
# 实现正则化的代价函数 def costReg(theta, X, y, learningRate): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) first = np.multiply(-y, np.log(sigmoid(X * theta.T))) second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T))) reg = (learningRate / (2 * len(X))) * np.sum(np.power(theta[:,1:theta.shape[1]], 2)) return np.sum(first - second) / len(X) + regIn [20]:
# 实现正则化的梯度函数 def gradientReg(theta, X, y, learningRate): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) parameters = int(theta.ravel().shape[1]) grad = np.zeros(parameters) error = sigmoid(X * theta.T) - y for i in range(parameters): term = np.multiply(error, X[:,i]) if (i == 0): grad[i] = np.sum(term) / len(X) else: grad[i] = (np.sum(term) / len(X)) + ((learningRate / len(X)) * theta[:,i]) return gradIn [21]:
# 初始化X,y,θ cols = data2.shape[1] X2 = data2.iloc[:,1:cols] y2 = data2.iloc[:,0:1] theta2 = np.zeros(cols-1) # 进行类型转换 X2 = np.array(X2.values) y2 = np.array(y2.values) # λ设为1 learningRate = 1In [22]:
# 计算初始代价 costReg(theta2, X2, y2, learningRate)Out[22]:
0.6931471805599454
2.3.1 用工具库求解参数¶
In [23]:result2 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate)) result2Out[23]:
(array([ 1.27271026, 0.62529965, 1.18111686, -2.01987398, -0.91743189, -1.43166928, 0.12393228, -0.36553118, -0.35725404, -0.17516291, -1.45817009, -0.05098418, -0.61558555, -0.27469165, -1.19271298, -0.24217841, -0.20603299, -0.04466178, -0.27778951, -0.29539514, -0.45645982, -1.04319155, 0.02779373, -0.2924487 , 0.0155576 , -0.32742405, -0.1438915 , -0.92467487]), 32, 1)
最后,我们可以使用第1部分中的预测函数来查看我们的方案在训练数据上的准确度。
In [24]:theta_min = np.matrix(result2[0]) predictions = predict(theta_min, X2) correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y2)] accuracy = (sum(map(int, correct)) % len(correct)) print ('accuracy = {0}%'.format(accuracy))
accuracy = 98%
2.4 画出决策的曲线¶
In [25]:def hfunc2(theta, x1, x2): temp = theta[0][0] place = 0 for i in range(1, degree+1): for j in range(0, i+1): temp+= np.power(x1, i-j) * np.power(x2, j) * theta[0][place+1] place+=1 return tempIn [26]:
def find_decision_boundary(theta): t1 = np.linspace(-1, 1.5, 1000) t2 = np.linspace(-1, 1.5, 1000) cordinates = [(x, y) for x in t1 for y in t2] x_cord, y_cord = zip(*cordinates) h_val = pd.DataFrame({'x1':x_cord, 'x2':y_cord}) h_val['hval'] = hfunc2(theta, h_val['x1'], h_val['x2']) decision = h_val[np.abs(h_val['hval']) < 2 * 10**-3] return decision.x1, decision.x2In [27]:
fig, ax = plt.subplots(figsize=(12,8)) ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted') ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected') ax.set_xlabel('Test 1 Score') ax.set_ylabel('Test 2 Score') x, y = find_decision_boundary(result2) plt.scatter(x, y, c='y', s=10, label='Prediction') ax.legend() plt.show()
2.5 改变λ,观察决策曲线¶
$\lambda=0$时过拟合
In [28]:learningRate2 = 0 result3 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate2))Out[28]:
(array([ 14.60193336, 21.20326682, 4.60748805, -150.30636263, -70.51421716, -65.71761632, -167.22986423, -100.93094956, -58.4583472 , 9.35117823, 538.72438097, 445.25267052, 633.43046793, 239.567217 , 92.6608774 , 300.20568543, 362.78215934, 440.45538844, 196.63024035, 52.26698467, -13.32416223, -639.85098768, -782.82561038, -1230.55113233, -846.7737968 , -793.91524305, -273.62741174, -51.4586635 ]), 280, 3)In [29]:
fig, ax = plt.subplots(figsize=(12,8)) ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted') ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected') ax.set_xlabel('Test 1 Score') ax.set_ylabel('Test 2 Score') x, y = find_decision_boundary(result3) plt.scatter(x, y, c='y', s=10, label='Prediction') ax.legend() plt.show()
$\lambda=100$时欠拟合
In [30]:learningRate3 = 100 result4 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate3))In [31]:
fig, ax = plt.subplots(figsize=(12,8)) ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted') ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected') ax.set_xlabel('Test 1 Score') ax.set_ylabel('Test 2 Score') x, y = find_decision_boundary(result4) plt.scatter(x, y, c='y', s=10, label='Prediction') ax.legend() plt.show()标签:吴恩达,python,label,Test,ex2,theta,np,ax,data From: https://www.cnblogs.com/presleyren/p/17212511.html