Naive Bayes (NB) makes an assumption about the relationship between data attributes: the attribute dimensions are conditionally independent of one another given the class.
In NB we assume $X$ is discrete, following a multinomial distribution (which includes the Bernoulli case). In GDA, $X$ can be modeled with a multivariate Gaussian, but in NB we cannot directly use a single multinomial distribution over the whole vector. We use a spam classifier to illustrate the idea behind NB.
In this classifier we can use a word vector as the input feature. Concretely, if our dictionary contains 50000 words, then the vector $x$ for an email can be
$$x=\left[\begin{matrix}1\\0\\0\\\vdots\\1\\\vdots\\0\end{matrix}\right]\begin{matrix}\text{a}\\\text{aardvark}\\\text{aardwolf}\\\vdots\\\text{buy}\\\vdots\\\text{zen}\end{matrix}$$
Here $x$ is a $50000$-dimensional vector: if a dictionary word appears in the email, the corresponding position is set to $1$; otherwise it is $0$.
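As a minimal sketch of this representation (the five-word vocabulary and the email below are made-up toy values standing in for the 50000-word dictionary):

```python
# Build a binary word-occurrence vector for an email over a fixed vocabulary.
# The vocabulary and email are toy examples for illustration only.
vocab = ["a", "aardvark", "aardwolf", "buy", "zen"]
word_index = {w: i for i, w in enumerate(vocab)}

def email_to_vector(email_words, word_index):
    x = [0] * len(word_index)
    for w in email_words:
        if w in word_index:
            x[word_index[w]] = 1  # 1 if the word appears, regardless of count
    return x

print(email_to_vector(["buy", "a", "buy"], word_index))  # [1, 0, 0, 1, 0]
```

Note that the vector only records presence or absence, not word counts.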
If we modeled $p(x|y)$ directly with a multinomial distribution, then since each dimension can take the two values $0,1$, the vector $x$ has $2^{50000}$ possible combinations, so $p(x|y)$ takes $2^{50000}$ distinct values and we would need at least $2^{50000}-1$ parameters (the probabilities must sum to $1$). Estimating this many parameters is infeasible, so we make a strong assumption to simplify the probability model.
Author: [rushshi]
Given a dataset
$$
\begin{gathered}
\left\{(x_{i},y_{i})\right\}_{i=1}^{N},\quad x_{i}\in \mathbb{R}^{p},\quad y_{i}\in \left\{0,1\right\}
\end{gathered}
$$
Naive Bayes assumes that the dimensions are independent of one another given the class, so
$$
\begin{aligned}
p(x_{1},\cdots ,x_{p}|y)&=p(x_{1}|y)p(x_{2}|y,x_{1})\cdots p(x_{p}|y,x_{1},\cdots ,x_{p-1})\\
&\quad\text{(by the naive Bayes assumption, the dimensions are independent given }y\text{)}\\
&=p(x_{1}|y)p(x_{2}|y)\cdots p(x_{p}|y)\\
&=\prod\limits_{j=1}^{p}p(x_{j}|y)
\end{aligned}
$$
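A quick numerical check of this factorization; the per-dimension probabilities $\phi_j = p(x_j=1|y)$ below are arbitrary toy values for one fixed class, not estimated from data:

```python
import numpy as np

# Under the naive Bayes assumption, p(x|y) factorizes over dimensions.
# phi[j] plays the role of p(x_j = 1 | y) for one fixed class (toy values).
phi = np.array([0.8, 0.1, 0.3])
x = np.array([1, 0, 1])

# p(x|y) = prod_j phi_j^{x_j} (1 - phi_j)^{1 - x_j}
p_x_given_y = np.prod(phi**x * (1 - phi)**(1 - x))
print(p_x_given_y)  # 0.8 * 0.9 * 0.3 = 0.216
```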
We first make the following assumptions:
$$
\begin{aligned}
y &\sim B(1,\phi_{y})\\
&\Rightarrow p(y)=\phi_{y}^{y}(1-\phi_{y})^{1-y}\\
p(x_{j}=1|y=0)&=\phi_{j|y=0}\\
p(x_{j}=1|y=1)&=\phi_{j|y=1}\\
\phi_{j|y}&=\phi_{j|y=1}^{y}\,\phi_{j|y=0}^{1-y}\\
p(x_{j}|y)&=\phi_{j|y}^{x_{j}}(1-\phi_{j|y})^{1-x_{j}}
\end{aligned}
$$
The log-likelihood function is
$$
\begin{aligned}
L(\phi_{y},\phi_{j|y=0},\phi_{j|y=1})&=\log \prod\limits_{i=1}^{N}p(x_{i},y_{i})\\
&=\log \prod\limits_{i=1}^{N}p(x_{i}|y_{i})p(y_{i})\\
&=\log \prod\limits_{i=1}^{N}\left(\prod\limits_{j=1}^{p}p(x_{ij}|y_{i})\right) p(y_{i})\\
&=\sum\limits_{i=1}^{N}\left[\log p(y_{i})+\sum\limits_{j=1}^{p}\log p(x_{ij}|y_{i})\right]\\
&=\sum\limits_{i=1}^{N}\left[\underbrace{y_{i}\log \phi_{y}+(1-y_{i})\log (1-\phi_{y})}_{(1)}+\underbrace{\sum\limits_{j=1}^{p}\left[x_{ij}\log \phi_{j|y_{i}}+(1-x_{ij})\log (1-\phi_{j|y_{i}})\right]}_{(2)}\right]
\end{aligned}
$$
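The decomposition into terms $(1)$ and $(2)$ can be verified numerically; all parameter values and data below are made up for illustration:

```python
import numpy as np

# Evaluate the log-likelihood on toy data and check that the sum of
# terms (1) and (2) equals log of the product of joint probabilities.
X = np.array([[1, 0], [0, 1]])          # N=2 samples, p=2 binary features
y = np.array([0, 1])
phi_y = 0.5                              # toy parameter values
phi0 = np.array([0.6, 0.2])              # phi_{j|y=0}
phi1 = np.array([0.3, 0.7])              # phi_{j|y=1}

term1 = np.sum(y * np.log(phi_y) + (1 - y) * np.log(1 - phi_y))
phis = np.where(y[:, None] == 1, phi1, phi0)   # phi_{j|y_i} for each sample
term2 = np.sum(X * np.log(phis) + (1 - X) * np.log(1 - phis))
log_lik = term1 + term2
# Equals log[ p(x_1, y_1) * p(x_2, y_2) ] = log(0.5*0.6*0.8) + log(0.5*0.7*0.7)
```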
For $\phi_{j|y=0}$: only samples with $y_{i}=0$ involve $\phi_{j|y=0}$, and for a fixed $j$ only the $j$-th summand matters, so
$$
\begin{aligned}
(2)&=\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{p}\left[x_{ij}\log \phi_{j|y_{i}}+(1-x_{ij})\log (1-\phi_{j|y_{i}})\right]\\
\frac{\partial (2)}{\partial \phi_{j|y=0}}&=\sum\limits_{i=1}^{N}\left[x_{ij} \frac{1}{\phi_{j|y=0}}-\left(1-x_{ij}\right) \frac{1}{1-\phi_{j|y=0}}\right]1\left\{y_{i}=0\right\}=0\\
0&=\sum\limits_{i=1}^{N}\left[(x_{ij}-\phi_{j|y=0})\,1\left\{y_{i}=0\right\}\right]\\
0&=\sum\limits_{i=1}^{N}x_{ij}\,1\left\{y_{i}=0\right\}-\phi_{j|y=0}\sum\limits_{i=1}^{N}1\left\{y_{i}=0\right\}\\
0&=\sum\limits_{i=1}^{N}1\left\{x_{ij}=1\land y_{i}=0\right\}-\phi_{j|y=0}\sum\limits_{i=1}^{N}1\left\{y_{i}=0\right\}\\
\widehat{\phi_{j|y=0}}&=\frac{\sum\limits_{i=1}^{N}1\left\{x_{ij}=1 \land y_{i}=0\right\}}{\sum\limits_{i=1}^{N}1\left\{y_{i}=0\right\}}
\end{aligned}
$$
Here $1\{\cdot\}$ is the indicator function
$$1_{A}(x)=\left\{\begin{aligned}&1&&x \in A\\&0&&x \notin A\end{aligned}\right.$$
also written $I_{A}(x)$ or $\chi_{A}(x)$.
The indicator function plays the same role as the class-membership sets used in GDA, i.e.
$$\begin{gathered}C_{1}=\left\{x_{i}\mid y_{i}=1,\ i=1,2,\cdots,N\right\},\quad |C_{1}|=N_{1}\\C_{0}=\left\{x_{i}\mid y_{i}=0,\ i=1,2,\cdots,N\right\},\quad |C_{0}|=N_{0}\\\sum\limits_{x_{i}\in C_{1}},\quad \sum\limits_{x_{i}\in C_{0}}\end{gathered}$$
$\widehat{\phi_{j|y=0}}$ can be read as: among the samples with $y=0$, the number whose $j$-th dimension equals $1$, divided by the total number of samples with $y=0$.
Similarly, for $\widehat{\phi_{j|y=1}}$:
$$
\widehat{\phi_{j|y=1}}=\frac{\sum\limits_{i=1}^{N}1\left\{x_{ij}=1\land y_{i}=1\right\}}{\sum\limits_{i=1}^{N}1\left\{y_{i}=1\right\}}
$$
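Both conditional estimates reduce to per-class column means of the binary design matrix. A sketch on toy data (the matrix and labels below are made up for illustration):

```python
import numpy as np

# MLE of phi_{j|y=0} and phi_{j|y=1} as frequency counts on toy data.
X = np.array([[1, 0, 1],
              [0, 0, 1],
              [1, 1, 0],
              [1, 0, 0]])          # N=4 samples, p=3 binary features
y = np.array([0, 0, 1, 1])

# count(x_ij = 1 and y_i = c) / count(y_i = c), vectorized over j
phi_j_y0 = X[y == 0].mean(axis=0)  # -> [0.5, 0.0, 1.0]
phi_j_y1 = X[y == 1].mean(axis=0)  # -> [1.0, 0.5, 0.0]
```

The boolean mask `y == 0` selects exactly the samples counted by $1\{y_i=0\}$, and the column mean performs both the numerator count and the division.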
For $\phi_{y}$:
$$
\begin{aligned}
(1)&=\sum\limits_{i=1}^{N}\left[y_{i}\log \phi_{y}+(1-y_{i})\log (1-\phi_{y})\right]\\
\frac{\partial (1)}{\partial \phi_{y}}&=\sum\limits_{i=1}^{N}\left[y_{i} \frac{1}{\phi_{y}}-\left(1-y_{i}\right) \frac{1}{1-\phi_{y}}\right]=0\\
0&=\sum\limits_{i=1}^{N}\left[y_{i}(1-\phi_{y})-(1-y_{i})\phi_{y}\right]\\
0&=\sum\limits_{i=1}^{N}(y_{i}-\phi_{y})\\
\hat{\phi_{y}}&=\frac{\sum\limits_{i=1}^{N}1\left\{y_{i}=1\right\}}{N}
\end{aligned}
$$
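Putting the three estimates together gives a complete toy classifier. The Laplace smoothing used below is a standard addition to keep every probability strictly inside $(0,1)$; it is not derived in this post, and all data values are made up for illustration:

```python
import numpy as np

# Train a Bernoulli naive Bayes classifier from the MLE formulas, then
# classify by comparing log p(y) + log p(x|y) across the two classes.
X = np.array([[1, 0, 1],
              [0, 0, 1],
              [1, 1, 0],
              [1, 0, 0]])
y = np.array([0, 0, 1, 1])

phi_y = np.mean(y == 1)            # hat{phi}_y = sum 1{y_i=1} / N
# Laplace-smoothed conditionals (add-one in numerator, add-two in denominator)
phi0 = (X[y == 0].sum(axis=0) + 1) / ((y == 0).sum() + 2)
phi1 = (X[y == 1].sum(axis=0) + 1) / ((y == 1).sum() + 2)

def predict(x):
    # log posterior (up to the shared normalizer p(x)) for each class
    log_p0 = np.log(1 - phi_y) + np.sum(x * np.log(phi0) + (1 - x) * np.log(1 - phi0))
    log_p1 = np.log(phi_y) + np.sum(x * np.log(phi1) + (1 - x) * np.log(1 - phi1))
    return int(log_p1 > log_p0)

print(predict(np.array([0, 0, 1])))  # 0
```

Working in log space avoids underflow when $p$ is large, which matters for the 50000-dimensional spam example.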
Here we assumed each $x_{j}$ can only take the values $0,1$, but in practice $x_{j}$ often follows a categorical distribution. The approach is exactly the same; there are simply more parameters to estimate, so we omit the derivation.
From: https://blog.51cto.com/u_15767241/5748781