
[Reading Notes] Diffusion Models - Introduction


References:
Su Jianlin (苏剑林). 生成扩散模型漫谈
Lilian Weng. What are Diffusion Models?


Diffusion process

Diffusion models take the input image \(\mathbf{x}_0\) and gradually add Gaussian noise to it through a series of \(T\) steps. We will call this the forward process.
Afterward, a neural network is trained to recover the original data by reversing the noising process. By being able to model the reverse process, we can generate new data. This is the so-called reverse diffusion process or, in general, the sampling process of a generative model.

Forward diffusion

Given \(\mathbf{x}_0\), a Markov process adds Gaussian noise step by step:

\[\boldsymbol{x}_t = \sqrt{1 - \beta_t^2} \boldsymbol{x}_{t-1} + \beta_t \boldsymbol{\varepsilon}_t,\quad \boldsymbol{\varepsilon}_t\sim\mathcal{N}(\boldsymbol{0}, \boldsymbol{I}) \tag{1} \]

\[q(\mathbf{x}_t \vert \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t;\ \boldsymbol{\mu}_t=\sqrt{1 - \beta_t^2} \mathbf{x}_{t-1},\ \boldsymbol{\Sigma}_t=\beta_t^2\mathbf{I}) \]

Here each step's \(\beta_t>0\) is very close to \(0\), i.e. the noise added per step is small. As for why the mean uses the factor \(\sqrt{1 - \beta_t^2}\): the derivation below shows that this makes the final expression take the form of a normal distribution about the initial sample \(\mathbf{x}_0\), i.e. the accumulated noise has zero mean.
The distribution of the full forward trajectory is \(q(\mathbf{x}_{1:T} \vert \mathbf{x}_0) = \prod^T_{t=1} q(\mathbf{x}_t \vert \mathbf{x}_{t-1})\), where \(\mathbf{x}_{1:T}\) denotes iterating \(q\) from \(1\) to \(T\); this sequence is also called the trajectory.
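As a minimal sketch (not from the original posts), one forward step under this convention, where \(\alpha_t^2+\beta_t^2=1\) and \(\beta_t\) plays the role of the per-step noise standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_step(x_prev, beta_t):
    """One step of q(x_t | x_{t-1}) in the convention alpha_t^2 + beta_t^2 = 1:
    x_t = sqrt(1 - beta_t**2) * x_{t-1} + beta_t * eps,  eps ~ N(0, I)."""
    eps = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - beta_t**2) * x_prev + beta_t * eps

x0 = rng.standard_normal((3, 32, 32))   # stand-in for a training image
x1 = forward_step(x0, beta_t=0.02)      # one slightly-noised step
```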

The reparameterization trick

Tractable closed-form sampling at any timestep

Rewriting further, consider the reparameterization form \(z=\mu+\sigma\epsilon,\ \epsilon\sim\mathcal{N}(0,1)\).
Let \(\alpha_t = \sqrt{1 - \beta_t^2}\), and write \(\bar{\alpha}_t = \prod_{i=1}^t \alpha_i,\ \bar{\beta}_t = \sqrt{1 - \bar{\alpha}_t^2}\):

\[\begin{aligned} \mathbf{x}_t &= \alpha_t\mathbf{x}_{t-1} + \sqrt{1 - \alpha_t^2}\boldsymbol{\epsilon}_{t-1} & \text{ ;where } \boldsymbol{\epsilon}_{t-1}, \boldsymbol{\epsilon}_{t-2}, \dots \sim \mathcal{N}(\mathbf{0}, \boldsymbol{I}) \\ &= \alpha_t \alpha_{t-1} \mathbf{x}_{t-2} + \sqrt{1 - (\alpha_t \alpha_{t-1})^2} \bar{\boldsymbol{\epsilon}}_{t-2} & \text{ ;where $\bar{\boldsymbol{\epsilon}}_{t-2}$ merges two Gaussians (*).} \\ &= \dots \\ &= \bar{\alpha}_t\mathbf{x}_0 + \bar{\beta}_t\boldsymbol{\epsilon} & \text{ ; $\boldsymbol{\epsilon}\sim\mathcal{N}(\boldsymbol{0}, \boldsymbol{I})$} \\ \end{aligned} \tag{2} \]

(*) Merging \(\mathcal{N}(\mathbf{0}, \sigma_1^2\mathbf{I})\) and \(\mathcal{N}(\mathbf{0}, \sigma_2^2\mathbf{I})\) leads to \(\mathcal{N}(\mathbf{0}, (\sigma_1^2 + \sigma_2^2)\mathbf{I})\)

This expresses \(\mathbf{x}_t\) directly in terms of \(\mathbf{x}_0\). In practice, the schedule is chosen so that \(\bar{\alpha}_T\approx 0\), which means \(\mathbf{x}_T\) is essentially already a standard normal sample.
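Equation (2) is what makes training practical: \(\mathbf{x}_t\) can be sampled in one shot rather than by iterating \(t\) steps. A sketch under the same convention (the schedule array `alphas` borrows the form quoted in the hyperparameter section below):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_xt(x0, alphas, t):
    """Sample x_t directly from x_0 via (2): x_t = alpha_bar_t * x0 + beta_bar_t * eps."""
    alpha_bar = np.prod(alphas[:t])          # \bar{alpha}_t = product of alpha_1..alpha_t
    beta_bar = np.sqrt(1.0 - alpha_bar**2)   # \bar{beta}_t = sqrt(1 - \bar{alpha}_t^2)
    eps = rng.standard_normal(x0.shape)
    return alpha_bar * x0 + beta_bar * eps

T = 1000
alphas = np.sqrt(1.0 - 0.02 * np.arange(1, T + 1) / T)
xt = sample_xt(rng.standard_normal((3, 32, 32)), alphas, t=500)
```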

Variance schedule

The variance parameter \(\beta_t\) can be fixed to a constant or chosen as a schedule over the \(T\) timesteps. In fact, one can define a variance schedule, which can be linear, quadratic, cosine etc.
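As an illustration (a sketch, not the papers' verbatim constants), here are a linear schedule in this post's convention and a cosine-style schedule in the spirit of Nichol & Dhariwal, where \(\bar{\alpha}_t^2\) here plays the role of their \(\bar{\alpha}_t\):

```python
import numpy as np

T = 1000
ts = np.arange(1, T + 1)

# Linear: beta_t^2 grows linearly in t, matching the DDPM-style choice
# alpha_t = sqrt(1 - 0.02 t / T) quoted in the hyperparameter section below.
alphas_linear = np.sqrt(1.0 - 0.02 * ts / T)

# Cosine-style: define \bar{alpha}_t^2 by a squared cosine and recover the
# per-step alpha_t from the ratio of consecutive \bar{alpha}_t^2 values.
s = 0.008                                   # small offset, as in Nichol & Dhariwal
grid = np.arange(0, T + 1)
f = np.cos((grid / T + s) / (1 + s) * np.pi / 2) ** 2
alpha_bar_sq = f / f[0]                     # \bar{alpha}_t^2 for t = 0..T
alphas_cosine = np.sqrt(alpha_bar_sq[1:] / alpha_bar_sq[:-1])
```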

Reverse diffusion

Consider \(\boldsymbol{x}_{t}\to \boldsymbol{x}_{t-1}\). Suppose the generative model is \(\boldsymbol{\mu}(\boldsymbol{x}_t)\); since inverting \((1)\) gives \(\boldsymbol{x}_{t-1} = \frac{1}{\alpha_t}\left(\boldsymbol{x}_t - \beta_t \boldsymbol{\varepsilon}_t\right)\), we design it as:

\[\boldsymbol{\mu}(\boldsymbol{x}_t) = \frac{1}{\alpha_t}\left(\boldsymbol{x}_t - \beta_t \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\boldsymbol{x}_t, t)\right) \]

Here \(t\) is also an input to the model (the reason for feeding \(t\) in is explained at the end), and \(t\) likewise has to be sampled; \(\boldsymbol{\theta}\) denotes the trainable parameters. For the cost function we directly take the Euclidean distance \(\left\Vert\boldsymbol{x}_{t-1} - \boldsymbol{\mu}(\boldsymbol{x}_t)\right\Vert^2\); substituting gives:

\[\left\Vert\boldsymbol{x}_{t-1} - \boldsymbol{\mu}(\boldsymbol{x}_t)\right\Vert^2 = \frac{\beta_t^2}{\alpha_t^2}\left\Vert \boldsymbol{\varepsilon}_t - \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\boldsymbol{x}_t, t)\right\Vert^2 \]

Next, eliminate \(\boldsymbol{x}_t\) by combining \((1)\) and \((2)\) (note that \(\bar{\boldsymbol{\varepsilon}}_t\) has to be split into \(\bar{\boldsymbol{\varepsilon}}_{t-1}\) and \(\boldsymbol{\varepsilon}_t\), so that the two can be sampled independently):

\[\begin{aligned} \boldsymbol{x}_t &= \alpha_t\boldsymbol{x}_{t-1} + \beta_t \boldsymbol{\varepsilon}_t = \alpha_t\left(\bar{\alpha}_{t-1}\boldsymbol{x}_0 + \bar{\beta}_{t-1}\bar{\boldsymbol{\varepsilon}}_{t-1}\right) + \beta_t \boldsymbol{\varepsilon}_t \\ &= \bar{\alpha}_t\boldsymbol{x}_0 + \alpha_t\bar{\beta}_{t-1}\bar{\boldsymbol{\varepsilon}}_{t-1} + \beta_t \boldsymbol{\varepsilon}_t \end{aligned} \]

Plugging this in and dropping the weight \(\frac{\beta_t^2}{\alpha_t^2}\), we obtain the loss:

\[\left\Vert \boldsymbol{\varepsilon}_t - \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\bar{\alpha}_t\boldsymbol{x}_0 + \alpha_t\bar{\beta}_{t-1}\bar{\boldsymbol{\varepsilon}}_{t-1} + \beta_t \boldsymbol{\varepsilon}_t, t)\right\Vert^2 \tag{3} \]

Inspecting the loss, it involves four random variables that must be sampled:

  • sample an \(\boldsymbol{x}_0\) from the training set;
  • sample \(\bar{\boldsymbol{\varepsilon}}_{t-1}\) and \(\boldsymbol{\varepsilon}_t\) independently from \(\mathcal{N}(\boldsymbol{0}, \boldsymbol{I})\);
  • sample a \(t\) uniformly from \(1\sim T\).

The more random variables there are to sample, the harder it is to estimate the loss accurately — put the other way, each estimate of the loss fluctuates too much (has too large a variance). Fortunately, we can merge \(\bar{\boldsymbol{\varepsilon}}_{t-1}\) and \(\boldsymbol{\varepsilon}_t\) into a single normal random variable via an integration trick, which alleviates the variance problem.

Notice the component \(\alpha_t\bar{\beta}_{t-1}\bar{\boldsymbol{\varepsilon}}_{t-1} + \beta_t \boldsymbol{\varepsilon}_t\): since the two variables are independent, its standard deviation is the square root of the sum of squared coefficients, namely \(\sqrt{\alpha_t^2\bar{\beta}_{t-1}^2 + \beta_t^2} = \bar{\beta}_t\), so it equals \(\bar{\beta}_t\boldsymbol{\varepsilon}\) with \(\boldsymbol{\varepsilon}\sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})\).
Now consider the combination with the coefficients swapped and one sign flipped, \(\beta_t \bar{\boldsymbol{\varepsilon}}_{t-1} - \alpha_t\bar{\beta}_{t-1} \boldsymbol{\varepsilon}_t\); by the same reasoning it equals \(\bar{\beta}_t\boldsymbol{\omega}\) with \(\boldsymbol{\omega}\sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})\), and one can verify that \(\mathbb{E}[\boldsymbol{\varepsilon}\boldsymbol{\omega}^{\top}]=\boldsymbol{0}\), i.e. the two are independent (they are jointly Gaussian, so zero correlation suffices). Conversely, we can now re-express \(\boldsymbol{\varepsilon}_t\) in terms of \(\boldsymbol{\varepsilon}\) and \(\boldsymbol{\omega}\):

\[\boldsymbol{\varepsilon}_t = \frac{(\beta_t \boldsymbol{\varepsilon} - \alpha_t\bar{\beta}_{t-1} \boldsymbol{\omega})\bar{\beta}_t}{\beta_t^2 + \alpha_t^2\bar{\beta}_{t-1}^2} = \frac{\beta_t \boldsymbol{\varepsilon} - \alpha_t\bar{\beta}_{t-1} \boldsymbol{\omega}}{\bar{\beta}_t} \]
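As a quick check of the independence claim above (added here for completeness), using \(\mathbb{E}[\bar{\boldsymbol{\varepsilon}}_{t-1}\bar{\boldsymbol{\varepsilon}}_{t-1}^{\top}]=\mathbb{E}[\boldsymbol{\varepsilon}_t\boldsymbol{\varepsilon}_t^{\top}]=\boldsymbol{I}\) and \(\mathbb{E}[\bar{\boldsymbol{\varepsilon}}_{t-1}\boldsymbol{\varepsilon}_t^{\top}]=\boldsymbol{0}\):

\[\bar{\beta}_t^2\,\mathbb{E}[\boldsymbol{\varepsilon}\boldsymbol{\omega}^{\top}] = \mathbb{E}\left[\left(\alpha_t\bar{\beta}_{t-1}\bar{\boldsymbol{\varepsilon}}_{t-1} + \beta_t \boldsymbol{\varepsilon}_t\right)\left(\beta_t \bar{\boldsymbol{\varepsilon}}_{t-1} - \alpha_t\bar{\beta}_{t-1} \boldsymbol{\varepsilon}_t\right)^{\top}\right] = \alpha_t\bar{\beta}_{t-1}\beta_t\boldsymbol{I} - \beta_t\alpha_t\bar{\beta}_{t-1}\boldsymbol{I} = \boldsymbol{0} \]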

Substituting back into \((3)\):

\[\begin{aligned} \text{(3)} &= \mathbb{E}_{\bar{\boldsymbol{\varepsilon}}_{t-1}, \boldsymbol{\varepsilon}_t\sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})}\left[\left\Vert \boldsymbol{\varepsilon}_t - \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\bar{\alpha}_t\boldsymbol{x}_0 + \alpha_t\bar{\beta}_{t-1}\bar{\boldsymbol{\varepsilon}}_{t-1} + \beta_t \boldsymbol{\varepsilon}_t, t)\right\Vert^2\right] \\ &= \mathbb{E}_{\boldsymbol{\omega}, \boldsymbol{\varepsilon}\sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})}\left[\left\Vert \frac{\beta_t \boldsymbol{\varepsilon} - \alpha_t\bar{\beta}_{t-1} \boldsymbol{\omega}}{\bar{\beta}_t} - \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\bar{\alpha}_t\boldsymbol{x}_0 + \bar{\beta}_t\boldsymbol{\varepsilon}, t)\right\Vert^2\right] \\ &= \frac{\beta_t^2}{\bar{\beta}_t^2}\mathbb{E}_{\boldsymbol{\varepsilon}\sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})}\left[\left\Vert\boldsymbol{\varepsilon} - \frac{\bar{\beta}_t}{\beta_t}\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\bar{\alpha}_t\boldsymbol{x}_0 + \bar{\beta}_t\boldsymbol{\varepsilon}, t)\right\Vert^2\right]+\text{const.} & _\text{; the loss is now only quadratic in $\boldsymbol{\omega}$, so we can expand and take its expectation in closed form} \\ \end{aligned} \]

The final cost function is therefore:

\[\left\Vert\boldsymbol{\varepsilon} - \frac{\bar{\beta}_t}{\beta_t}\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\bar{\alpha}_t\boldsymbol{x}_0 + \bar{\beta}_t\boldsymbol{\varepsilon}, t)\right\Vert^2 \tag{4} \]
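In code, one stochastic estimate of (4) might look like the following sketch, where `model` stands for an assumed network \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\boldsymbol{x}_t, t)\) (any architecture mapping a noisy batch plus timesteps back to noise):

```python
import torch

def loss_step(model, x0, alphas):
    """One Monte Carlo estimate of loss (4).

    model:  assumed eps_theta(x_t, t), (B,C,H,W) + (B,) -> (B,C,H,W)
    x0:     batch of training images, shape (B, C, H, W)
    alphas: tensor of per-step alpha_t values, shape (T,)
    """
    B, T = x0.shape[0], alphas.shape[0]
    t = torch.randint(1, T + 1, (B,), device=x0.device)    # sample t uniformly from 1..T
    alpha_bar = torch.cumprod(alphas, dim=0)[t - 1]        # \bar{alpha}_t
    beta_bar = torch.sqrt(1.0 - alpha_bar**2)              # \bar{beta}_t
    beta = torch.sqrt(1.0 - alphas[t - 1] ** 2)            # beta_t
    eps = torch.randn_like(x0)                             # the single merged noise
    view = (B, 1, 1, 1)
    x_t = alpha_bar.view(view) * x0 + beta_bar.view(view) * eps
    scale = (beta_bar / beta).view(view)                   # \bar{beta}_t / beta_t
    return ((eps - scale * model(x_t, t)) ** 2).mean()
```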

Generation

Once training is done, we can generate by starting from random noise \(\boldsymbol{x}_T\sim\mathcal{N}(\boldsymbol{0}, \boldsymbol{I})\) and executing \(T\) reverse steps:

\[\boldsymbol{x}_{t-1} = \frac{1}{\alpha_t}\left(\boldsymbol{x}_t - \beta_t \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\boldsymbol{x}_t, t)\right) \]

If we want to do random sampling rather than this deterministic update, a noise term has to be added back:

\[\boldsymbol{x}_{t-1} = \frac{1}{\alpha_t}\left(\boldsymbol{x}_t - \beta_t \boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\boldsymbol{x}_t, t)\right) + \sigma_t \boldsymbol{z},\quad \boldsymbol{z}\sim\mathcal{N}(\boldsymbol{0}, \boldsymbol{I}) \]

In general we can take \(\sigma_t=\beta_t\), keeping the forward and reverse variances in sync.
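Putting the two update rules together, a sketch of the full sampling loop (with the common convention of adding no noise at the final step; `model` is again the assumed \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}\)):

```python
import torch

@torch.no_grad()
def sample(model, shape, alphas, device="cpu"):
    """Generate by running the reverse process from x_T ~ N(0, I) down to x_0."""
    T = alphas.shape[0]
    x = torch.randn(shape, device=device)                 # x_T
    for t in range(T, 0, -1):
        alpha = alphas[t - 1]
        beta = torch.sqrt(1.0 - alpha**2)                 # beta_t
        t_batch = torch.full((shape[0],), t, device=device)
        x = (x - beta * model(x, t_batch)) / alpha        # the mean mu(x_t)
        if t > 1:                                         # sigma_t = beta_t
            x = x + beta * torch.randn_like(x)
    return x
```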

On the hyperparameters

In DDPM, \(T=1000\), and the paper's choice of \(\alpha_t\) corresponds to \(\alpha_t = \sqrt{1 - \frac{0.02t}{T}}\).
Intuitively, Euclidean distance is a rather crude metric for images — it only works well when two images are already very similar. This motivates choosing a monotonically decreasing \(\alpha_t\), i.e. the larger \(t\) is, the larger the gap between adjacent images: when \(t\) is small, we keep \(\alpha_t\) close to \(1\) so that the Euclidean distance remains appropriate; when \(t\) is large, the object is already close to pure noise, so we can afford a larger gap between \(\boldsymbol{x}_{t-1}\) and \(\boldsymbol{x}_t\), corresponding to a smaller \(\alpha_t\).
Plugging \(T=1000\) into the schedule gives \(\bar{\alpha}_T\approx e^{-5}\), consistent with the assumption \(\bar{\alpha}_T\approx 0\) mentioned earlier.
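This estimate is easy to verify numerically (a tiny check, not from the original posts):

```python
import numpy as np

T = 1000
alphas = np.sqrt(1.0 - 0.02 * np.arange(1, T + 1) / T)
print(np.prod(alphas), np.exp(-5))   # ~0.0065 vs ~0.0067: \bar{alpha}_T is indeed near e^{-5}
```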

Finally, to close the earlier loop: the model \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\bar{\alpha}_t\boldsymbol{x}_0 + \bar{\beta}_t\boldsymbol{\varepsilon}, t)\) takes \(t\) as an argument because, in principle, different values of \(t\) deal with objects at different noise levels and should use different reconstruction models — in theory there should be \(T\) distinct models. Since we share one set of parameters across all of them, \(t\) must be passed in as a conditioning input.
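In practice, DDPM injects \(t\) via Transformer-style sinusoidal embeddings added inside the network; a minimal sketch of such an embedding (the dimension and usage are illustrative assumptions):

```python
import math
import torch

def timestep_embedding(t, dim):
    """Sinusoidal embedding of the step index t, so a single shared network
    can be conditioned on which noise level it is denoising."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t[:, None].float() * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

emb = timestep_embedding(torch.tensor([1, 500, 1000]), dim=128)   # shape (3, 128)
```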

To be continued.

From: https://www.cnblogs.com/zhyh/p/17025715.html
