Loss Function Derivation
Linear Regression
First, a loss function measures how far the model's predictions are from the true data. So the question is: why squared loss, rather than absolute-value loss or fourth-power loss?
One straightforward intuition: the square is simple, its derivative is linear and continuous, and points far from the prediction get amplified.
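A small numerical sketch of that intuition (the residual values are made up; NumPy assumed):

```python
import numpy as np

# Hypothetical residuals (prediction minus target); 10.0 plays the outlier.
residuals = np.array([0.1, 1.0, 10.0])

# Squared loss: the gradient 2r is linear and continuous in the residual,
# and the loss grows quadratically, so far-away points dominate.
print(residuals ** 2)      # -> [0.01, 1.0, 100.0]
print(2 * residuals)       # -> [0.2, 2.0, 20.0]

# Absolute loss: the gradient is only the sign of the residual,
# constant in magnitude and undefined at zero.
print(np.abs(residuals))   # -> [0.1, 1.0, 10.0]
print(np.sign(residuals))  # -> [1.0, 1.0, 1.0]
```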
Derivation
Suppose the model has been trained to its best; there will still be some error against the true values. For example, a house seller happens to be in a good mood and runs a promotion. Such effects are unknowable, but when all of these errors are added together,
the sum can be treated as a Gaussian distribution (by the central limit theorem). That is, the error at every point satisfies \(\epsilon \sim \mathcal N(0,\sigma^2)\),
and the probability (density) of an error of size x is \(P(\epsilon=x) = \frac{1}{\sqrt{2 \pi} \sigma }e^{-\frac{x^2}{2\sigma^2}}\).
So the probability that each \(\pmb X^i\) yields \(\pmb y^i\) under this error is \(P(\pmb y^i|\pmb X^i) = \frac{1}{\sqrt{2 \pi} \sigma }e^{-\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}}\).
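A quick check of this density formula against a library implementation (all values below are made up; SciPy assumed):

```python
import numpy as np
from scipy.stats import norm

sigma = 1.0
theta = np.array([2.0, -1.0])   # hypothetical parameters
x_i = np.array([1.0, 3.0])      # one sample X^i
y_i = 0.5

# The residual X^i @ theta - y_i is the error epsilon for this sample.
resid = x_i @ theta - y_i

# Density written out as in the formula above...
manual = np.exp(-resid**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# ...agrees with the N(0, sigma^2) density evaluated at the residual.
assert np.isclose(manual, norm.pdf(resid, loc=0.0, scale=sigma))
```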
Everything above refers to an already-trained model.
For an as-yet-undetermined \(\pmb \theta\), the probability that \(\pmb X^i\) yields \(\pmb y^i\) under this error is \(P(\pmb y^i|\pmb X^i; \pmb \theta) = \frac{1}{\sqrt{2 \pi} \sigma }e^{-\frac{(\pmb X^i \pmb \theta - \pmb y^i)^2}{2\sigma^2}}\)
The probability that all the \(\pmb y^i\) occur together is then \(P(\pmb y| \pmb X; \pmb \theta)\); assuming the samples are independent, it factors as \(\prod_{i=1}^n P(\pmb y^i|\pmb X^i; \pmb \theta)\). Call it \(\mathcal L(\pmb \theta)\).
This expression is the probability of \(\pmb y\) occurring, so we only need the \(\pmb{\theta}\) that maximizes it. Since it is a product and contains many exponentials, the \(\log\) function turns the product into a sum and removes the exponentials. Let \(\mathcal l(\pmb \theta) = \log (\mathcal L(\pmb \theta))\)
\[\begin{array}{rcl} \mathcal l(\pmb \theta) &=& \log (\mathcal L(\pmb \theta))\\ &=&\log(\prod_{i=1}^n \frac{1}{\sqrt{2 \pi} \sigma }e^{-\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}})\\ &=&\sum_{i=1}^n(\log( \frac{1}{\sqrt{2 \pi} \sigma })+\log(e^{-\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}}))\\ &=&n\log( \frac{1}{\sqrt{2 \pi} \sigma }) - \sum_{i=1}^n\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}\\ &=&n\log( \frac{1}{\sqrt{2 \pi} \sigma }) - \frac{\sum_{i=1}^n(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}\\ \end{array} \]This leaves the function \(\sum_{i=1}^n(\pmb X^i\pmb\theta - \pmb y^i)^2\), i.e. \((\pmb X \pmb \theta - \pmb y)^T(\pmb X \pmb \theta - \pmb y)\). Since it enters \(\mathcal l(\pmb\theta)\) with a minus sign and the first term is constant in \(\pmb\theta\), minimizing this function is exactly maximizing \(\mathcal L(\pmb\theta)\).
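A numerical sanity check of this equivalence on synthetic data (all names and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 50, 0.5
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=sigma, size=n)

def log_likelihood(theta):
    # l(theta) as derived above: a constant minus the scaled sum of squares.
    r = X @ theta - y
    return n * np.log(1 / (np.sqrt(2 * np.pi) * sigma)) - r @ r / (2 * sigma**2)

def sum_of_squares(theta):
    r = X @ theta - y
    return r @ r

# Whichever candidate has the smaller sum of squares has the larger
# log-likelihood: minimizing one maximizes the other.
a, b = np.array([1.0, -2.0]), np.zeros(2)
assert (sum_of_squares(a) < sum_of_squares(b)) == (log_likelihood(a) > log_likelihood(b))
```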
We therefore take the loss
\(J(\pmb \theta) = (\pmb X \pmb \theta - \pmb y)^T(\pmb X \pmb \theta - \pmb y)\)
Taking its gradient:
\[\begin{array} {rcl} \frac{\partial J(\pmb \theta)}{\partial \pmb \theta} &=& \frac{\partial ((\pmb X \pmb \theta - \pmb y)^T(\pmb X \pmb \theta - \pmb y))}{\partial \pmb \theta}\\ &\overset{\pmb{x} = \pmb X\pmb\theta-\pmb y }{=}& (\frac{\partial\pmb x^T \pmb x}{\partial \pmb x})^T \frac{\partial (\pmb X \pmb\theta - \pmb y)}{\partial \pmb \theta}\\ &=& (2\pmb x)^T \pmb X\\ &=& 2(\pmb X\pmb\theta-\pmb y)^T \pmb X\\ \end{array} \]
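To make the result concrete, here is a minimal gradient-descent sketch using this gradient, checked against the closed-form least-squares solution (synthetic data; the step size and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, -1.0]) + rng.normal(scale=0.1, size=100)

theta = np.zeros(2)
lr = 0.001
for _ in range(2000):
    # Gradient derived above: 2 (X theta - y)^T X, transposed to a column.
    grad = 2 * X.T @ (X @ theta - y)
    theta -= lr * grad

# Gradient descent lands on the least-squares solution (X^T X)^{-1} X^T y.
closed_form = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(theta, closed_form, atol=1e-6)
```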
Logistic Regression
\[h(\pmb X^i) = \frac{1}{1+e^{- \pmb X^i \pmb\theta}} \]The derivation for logistic regression proceeds just as for linear regression:
\[\begin{array}{rcl} \mathcal l(\pmb \theta) &=& \log (\mathcal L(\pmb \theta))\\ &=& \log (P(\pmb y | \pmb X; \pmb \theta))\\ &=& \log (\prod_{i=1}^n P(\pmb y^i|\pmb X^i; \pmb \theta))\\ &=& \log (\prod_{i=1}^n h_{\pmb \theta} (\pmb X^i )^{\pmb y^i} (1 - h_{\pmb \theta}(\pmb X^i))^{1-\pmb y^i})\\ &=& \sum_{i=1}^n(\pmb y^i\log(h_{\pmb \theta} (\pmb X^i ))+(1-\pmb y^i )\log(1 - h_{\pmb \theta}(\pmb X^i)))\\ \end{array} \]The fourth line writes the Bernoulli probability of \(\pmb y^i \in \{0, 1\}\) as a single expression. Let \(J(\pmb \theta) = - \mathcal l(\pmb \theta)\).
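Written out in code, this \(J(\pmb\theta)\) is the familiar binary cross-entropy (a minimal sketch; the function and variable names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(theta, X, y):
    # J(theta) = -l(theta), exactly the negated sum derived above.
    h = sigmoid(X @ theta)
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

# Tiny usage example with made-up values: at theta = 0 every h is 0.5.
X = np.array([[0.5, 1.0], [-1.0, 2.0]])
y = np.array([1.0, 0.0])
print(cross_entropy(np.zeros(2), X, y))   # 2 * log(2) ~= 1.386
```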
Taking its gradient:
\[\begin{array} {rcl} \frac{\partial J(\pmb \theta)}{\partial \pmb \theta} &=& \frac{\partial(-\sum_{i=1}^n(\pmb y^i\log(h_{\pmb \theta} (\pmb X^i ))+(1-\pmb y^i )\log(1 - h_{\pmb \theta}(\pmb X^i))))}{\partial \pmb \theta}\\ &=& -\sum_{i=1}^n\frac{\partial(\pmb y^i\log(h)+(1-\pmb y^i )\log(1 - h))}{\partial h} \frac{\partial h(\pmb\theta)}{\partial \pmb \theta}\\ &=& -\sum_{i=1}^n(\frac{\pmb y^i}{h} - \frac{1 - \pmb y^i}{1 - h})h(1 - h)\pmb X^i\\ &=& -\sum_{i=1}^n(\pmb y^i(1-h) - h(1 - \pmb y^i))\pmb X^i\\ &=& \sum_{i=1}^n(h_{\pmb \theta }(\pmb X^i) - \pmb y^i)\pmb X^i\\ &=& ( \begin{bmatrix} \vdots \\ h_{\pmb \theta }(\pmb X^i)\\ \vdots\\ \end{bmatrix} - \pmb y)^T \pmb X\\ \end{array} \]The second line is the chain rule, and the third line uses the sigmoid derivative \(\frac{\partial h}{\partial \pmb \theta} = h(1-h)\pmb X^i\).
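And a matching gradient-descent sketch for logistic regression using this gradient (synthetic labels drawn from a hypothetical true \(\pmb\theta\); the learning rate and iteration count are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_theta = np.array([2.0, -3.0])
y = (rng.random(200) < sigmoid(X @ true_theta)).astype(float)

theta = np.zeros(2)
lr = 0.01
for _ in range(5000):
    # Gradient derived above: (h_theta(X) - y)^T X, as a column vector.
    grad = X.T @ (sigmoid(X @ theta) - y)
    theta -= lr * grad

print(theta)   # roughly recovers true_theta, up to sampling noise
```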
Why sigmoid?
For linear regression we assumed the error \(\epsilon\) follows a Gaussian distribution and derived the probability of each \(\pmb y^i\) from it.
Why does logistic regression need no such assumption, with \(h_{\pmb \theta}\) directly giving a probability? Is that derived, or is this function simply well-behaved?
See the link.