Linear Discriminant Analysis
Linear discriminant analysis performs a kind of dimensionality reduction: all of the data is projected onto a single line, and a threshold on that line splits it into two rays, one per class. The projection loses some information, but if what is lost is mostly noise, losing it is not necessarily a bad thing.
The result of linear discriminant analysis is a vector (the projection direction); could the result be anything else?
Main guiding idea (objective): small within-class scatter, large between-class scatter.
Derivation
What we obtain is a vector. To make the loss easier to compute, assume \(\|\pmb w\|=1\) and view each data point \(\pmb X_i\) as a (row) vector; then \(\pmb X_i\pmb w\) is the projection of that point onto the direction of \(\pmb w\). One of the planes orthogonal to \(\pmb w\) serves as the separating plane.
Name the two classes \(C_1\) and \(C_2\). Let \(\pmb\mu\), \(\pmb\mu_{C_1}\), \(\pmb\mu_{C_2}\) denote the means of all the data, of the \(C_1\) data, and of the \(C_2\) data, and let \(\pmb\Sigma\), \(\pmb\Sigma_{C_1}\), \(\pmb\Sigma_{C_2}\) denote the corresponding covariance matrices.
\(\tilde{\mu}\) and \(\tilde{\sigma}^2\) denote the mean and variance of the projected data.
Between-class: \((\tilde\mu_{C_1}-\tilde\mu_{C_2})^2\)
Within-class: \(\tilde\sigma^2_{C_1}+\tilde\sigma^2_{C_2}\)
Objective function: \(J(\pmb \theta) = \frac{(\tilde\mu_{C_1}-\tilde\mu_{C_2})^2}{\tilde\sigma^2_{C_1}+\tilde\sigma^2_{C_2}}\)
\[{\LARGE \begin{array}{ccl} J(\pmb \theta) &=& \frac{(\tilde\mu_{C_1}-\tilde\mu_{C_2})^2}{\tilde\sigma^2_{C_1}+\tilde\sigma^2_{C_2}}\\ \text{numerator}&=&(\tilde\mu_{C_1}-\tilde\mu_{C_2})^2\\ &=&(\frac{1}{N_{C_1}} \sum_1^{N_{C_1}}\pmb X_{C_1i}\pmb \theta- \frac{1}{N_{C_2}} \sum_1^{N_{C_2}}\pmb X_{C_2i}\pmb \theta)^2\\ &=&((\pmb\mu_{C_1}-\pmb\mu_{C_2})\pmb \theta)^2\\ &=&\pmb \theta^T(\pmb\mu_{C_1}-\pmb\mu_{C_2})^T(\pmb\mu_{C_1}-\pmb\mu_{C_2})\pmb \theta\\ \tilde\sigma^2_{C_1} &=& \frac{1}{N_{C_1}} \sum_1^{N_{C_1}}(\pmb X_{C_1i}\pmb \theta -\tilde\mu_{C_1} )^2\\ &=& \frac{1}{N_{C_1}} \sum_1^{N_{C_1}}(\pmb X_{C_1i}\pmb \theta -\frac{1}{N_{C_1}} \sum_1^{N_{C_1}}\pmb X_{C_1i}\pmb \theta )^2\\ &=& \frac{1}{N_{C_1}} \sum_1^{N_{C_1}}((\pmb X_{C_1i} -\frac{1}{N_{C_1}} \sum_1^{N_{C_1}}\pmb X_{C_1i})\pmb \theta )^2\\ &=& \frac{1}{N_{C_1}} \sum_1^{N_{C_1}}((\pmb X_{C_1i} -\pmb\mu_{C_1})\pmb \theta )^2\\ &=& \frac{1}{N_{C_1}} \sum_1^{N_{C_1}}\pmb \theta^T(\pmb X_{C_1i} -\pmb\mu_{C_1})^T(\pmb X_{C_1i} -\pmb\mu_{C_1})\pmb \theta \\ &=& \pmb\theta^T(\frac{1}{N_{C_1}} \sum_1^{N_{C_1}} (\pmb X_{C_1i} -\pmb\mu_{C_1})^T(\pmb X_{C_1i} -\pmb\mu_{C_1}))\pmb \theta \\ &=& \pmb\theta^T\pmb\Sigma_{C_1}\pmb \theta \\ \tilde\sigma^2_{C_2} &=& \pmb\theta^T\pmb\Sigma_{C_2}\pmb \theta\\ \text{denominator}&=&\pmb\theta^T\pmb\Sigma_{C_1}\pmb \theta+\pmb\theta^T\pmb\Sigma_{C_2}\pmb \theta\\ &=&\pmb\theta^T(\pmb\Sigma_{C_1}+\pmb\Sigma_{C_2})\pmb \theta\\ \end{array} } \]\({\LARGE \therefore}\)
\( {\LARGE \begin{array}{ccl} J(\pmb \theta) &=& \frac{\pmb \theta^T(\pmb\mu_{C_1}-\pmb\mu_{C_2})^T(\pmb\mu_{C_1}-\pmb\mu_{C_2})\pmb \theta}{\pmb\theta^T(\pmb\Sigma_{C_1}+\pmb\Sigma_{C_2})\pmb \theta}\\ \end{array} } \)
Let \(S_b = (\pmb\mu_{C_1}-\pmb\mu_{C_2})^T(\pmb\mu_{C_1}-\pmb\mu_{C_2})\) and \(S_w = \pmb\Sigma_{C_1}+\pmb\Sigma_{C_2}\).
\(S_b\) is the between-class scatter (variance).
\(S_w\) is the within-class scatter (variance).
Then \({\LARGE J(\pmb \theta) = \frac{\pmb \theta^T S_b \pmb \theta}{\pmb\theta^TS_w\pmb \theta}}\)
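A standard way to read this (an aside, not spelled out in the original notes): \(J(\pmb\theta)\) is a generalized Rayleigh quotient, and setting its gradient to zero, as done below, is equivalent to the generalized eigenvalue problem
\( S_b\pmb\theta = \lambda S_w\pmb\theta, \quad \lambda = J(\pmb\theta) \)
so the optimal \(\pmb\theta\) is the eigenvector of \(S_w^{-1}S_b\) with the largest eigenvalue.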
Taking the derivative
\[{\LARGE \begin{array}{rcl} \frac{\partial J(\pmb \theta)}{\partial \pmb\theta } &=& \frac{\partial\frac{\pmb \theta^T \pmb S_b \pmb \theta}{\pmb\theta^T\pmb S_w\pmb \theta}}{\partial\pmb\theta}\\ &=& \frac{\partial(\pmb \theta^T \pmb S_b \pmb \theta(\pmb\theta^T\pmb S_w\pmb \theta)^{-1})}{\partial\pmb\theta}\\ &=& \frac{\partial(\pmb \theta^T \pmb S_b \pmb \theta )}{\partial\pmb\theta}(\pmb\theta^T\pmb S_w\pmb \theta)^{-1}+\pmb \theta^T \pmb S_b \pmb \theta \frac{\partial((\pmb\theta^T\pmb S_w\pmb \theta)^{-1})}{\partial\pmb\theta}\\ &=&2\pmb\theta^T\pmb S_b(\pmb\theta^T\pmb S_w\pmb \theta)^{-1}+ \pmb \theta^T \pmb S_b \pmb \theta (- \frac{1}{(\pmb \theta^T \pmb S_w \pmb \theta )^2}) (2\pmb\theta^T\pmb S_w) \end{array} } \](Here we use \(\frac{\partial (\pmb\theta^T\pmb A\pmb\theta)}{\partial\pmb\theta}=2\pmb\theta^T\pmb A\) for symmetric \(\pmb A\); both \(\pmb S_b\) and \(\pmb S_w\) are symmetric.) Setting the derivative to zero:
\[{\LARGE \begin{array}{rcl} \pmb 0 &=& 2\pmb S_b\pmb\theta(\pmb\theta^T\pmb S_w\pmb \theta)^{-1}+ \pmb \theta^T \pmb S_b \pmb \theta (- \frac{1}{(\pmb \theta^T \pmb S_w \pmb \theta )^2}) (2\pmb S_w\pmb\theta) \\ 2\pmb S_b\pmb\theta(\pmb\theta^T\pmb S_w\pmb \theta)^{-1}&=& \pmb \theta^T \pmb S_b \pmb \theta ( \frac{1}{(\pmb \theta^T \pmb S_w \pmb \theta )^2}) (2\pmb S_w\pmb\theta)\\ \pmb S_b\pmb\theta(\pmb\theta^T\pmb S_w\pmb \theta) &=& (\pmb \theta^T \pmb S_b \pmb \theta )\pmb S_w\pmb\theta\\ (\pmb \theta^T \pmb S_b \pmb \theta )\pmb S_w\pmb\theta &=& \pmb S_b\pmb\theta(\pmb\theta^T\pmb S_w\pmb \theta) \\ \pmb\theta&=& \pmb S_w^{-1}\frac{\pmb \theta^T \pmb S_w \pmb \theta } {\pmb\theta^T\pmb S_b\pmb \theta}\pmb S_b\pmb\theta\\ \pmb\theta &=& \pmb S_w^{-1}\frac{\pmb \theta^T \pmb S_w \pmb \theta } {\pmb\theta^T\pmb S_b\pmb \theta}(\pmb\mu_{C_1}-\pmb\mu_{C_2})^T(\pmb\mu_{C_1}-\pmb\mu_{C_2})\pmb\theta\\ \end{array} } \]\(\because\)
\(\frac{\pmb \theta^T \pmb S_w \pmb \theta }{\pmb\theta^T\pmb S_b\pmb \theta}\) and \((\pmb\mu_{C_1}-\pmb\mu_{C_2})\pmb\theta\) are scalars, so they do not change the direction of \(\pmb \theta\).
\({\LARGE \therefore}\)
\({\LARGE \pmb\theta \propto \pmb S_w^{-1}(\pmb\mu_{C_1}-\pmb\mu_{C_2})^T }\)
\({\LARGE \mathcal{{\color{Blue} {if}} } } \pmb S_w \propto \pmb I\)
\({\LARGE \pmb \theta \propto (\pmb\mu_{C_1}-\pmb\mu_{C_2})^T }\)
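As a quick numerical check (the numbers are the population parameters of the synthetic dataset generated in the code below; they are an assumption of this example, not part of the derivation): with \(\pmb\mu_{C_1}=(1,1)\), \(\pmb\mu_{C_2}=(3,3)\) and \(\pmb\Sigma_{C_1}=\pmb\Sigma_{C_2}=0.64\pmb I\), we get \(\pmb S_w = 1.28\pmb I\), so
\( \pmb\theta \propto \pmb S_w^{-1}(\pmb\mu_{C_1}-\pmb\mu_{C_2})^T = \frac{1}{1.28}\begin{pmatrix}-2\\-2\end{pmatrix} \propto \begin{pmatrix}-1\\-1\end{pmatrix} \)
i.e. the projection direction lies along the line joining the two class means, exactly the \(\pmb S_w \propto \pmb I\) special case above.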
Projecting an arbitrary point
\( {\Large proj_{\pmb \theta}(x) = x^T\pmb\theta } \)
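In NumPy this is a single matrix product. A minimal sketch, reusing the names X (one sample per row) and theta (a column vector) from the code further down:
proj = X @ theta  # shape (n, 1): projection of every sample onto theta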
Computing the threshold
\( {\Large \begin{array}{rcl} threshold &=& \frac{N_{C_1}\tilde\mu_{C_1}+N_{C_2}\tilde\mu_{C_2}}{N_{C_1}+N_{C_2}}\\ &=& \frac{N_{C_1}\pmb\mu_{C_1}\pmb\theta+N_{C_2}\pmb\mu_{C_2}\pmb\theta}{N_{C_1}+N_{C_2}} \end{array} } \)
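Note that this is just the projection of the overall mean \(\pmb\mu\) defined earlier:
\( {\Large threshold = \frac{N_{C_1}\pmb\mu_{C_1}+N_{C_2}\pmb\mu_{C_2}}{N_{C_1}+N_{C_2}}\pmb\theta = \pmb\mu\pmb\theta } \)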
Dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Synthetic dataset
n = 100
X = np.random.multivariate_normal((1, 1), [[0.64, 0], [0, 0.64]], size = int(n/2))
X = np.insert(X, 50, np.random.multivariate_normal((3, 3), [[0.64, 0], [0,0.64]], size = int(n/2)),0)
#X = np.insert(X, 0, 1, 1)
m = X.shape[1]
y = np.array([1]*50+[-1]*50).reshape(-1,1)
plt.scatter(X[:50, -2], X[:50, -1])
plt.scatter(X[50:, -2], X[50:, -1], c = "#ff4400")
X1 = X[(y==1).reshape(-1)]
X0 = X[(y==-1).reshape(-1)]
n1 = np.array([[X1.shape[0]]])
n0 = np.array([[X0.shape[0]]])
mu1 = X1.mean(axis = 0).reshape(-1,1)
mu0 = X0.mean(axis = 0).reshape(-1,1)
Sigma1 = np.cov(X1.T)
Sigma0 = np.cov(X0.T)
theta = np.linalg.inv(Sigma1 + Sigma0) @ (mu1 - mu0)  # theta ∝ Sw^{-1}(mu1 - mu0), the direction derived above
threshold = (n1*mu1 + n0*mu0).T@theta/(n1 + n0)
def getForecast(x):
    # project a single point (a 2x1 column vector) onto theta
    return x.T @ theta
print(threshold)  # projection of the overall mean; the exact value depends on the random sample
Prediction
print(f'{1 if getForecast(np.array([[1], [1]])) > threshold else -1}')  # classify the point (1, 1)
1
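As an optional sanity check (this assumes scikit-learn is installed; it is not used anywhere else in this post), sklearn's LinearDiscriminantAnalysis maximizes the same Fisher criterion, so its prediction for the point (1, 1) should agree with the threshold rule above:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

lda = LinearDiscriminantAnalysis()
lda.fit(X, y.ravel())          # y flattened to a 1-D label array in {1, -1}
print(lda.predict([[1, 1]]))   # expected [1], matching the manual prediction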
Plotting the decision boundary
plt.scatter(X[:50, -2], X[:50, -1])
plt.scatter(X[50:, -2], X[50:, -1], c = "#ff4400")
for i in np.arange(-1, 5, 0.02):
    for j in np.arange(-1, 5, 0.02):
        if abs(getForecast(np.array([[i], [j]])) - threshold) < 0.01:
            plt.scatter(i, j, c="#000000")
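The grid scan above only marks grid points whose projection happens to fall within 0.01 of the threshold, so the boundary shows up as scattered dots. A cleaner alternative (a sketch of one possible approach, not part of the original code) is to plot the line \(x^T\pmb\theta = threshold\) directly:
# Decision boundary: x1*theta[0] + x2*theta[1] = threshold.
# Solve for x2 as a function of x1 (assumes theta[1, 0] != 0).
x1 = np.linspace(-1, 5, 100)
x2 = (threshold.item() - x1 * theta[0, 0]) / theta[1, 0]
plt.plot(x1, x2, c="#000000")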