Deep_Networks

Deep FeedForward Networks (Multilayer Perceptrons)

Gradient-Based Learning

Learning Conditional Distribution With MLE

$$ J(\pmb \theta)= -\mathbb E_{\pmb{x,y}\sim p_{data}} \log \ p_{model} (\pmb{y | x}) \\\ \\ When \ p_{model}(\pmb{y | x}) = \mathcal N(\pmb{y;f(x;\theta),I}) \\\ \\ J(\pmb \theta) = \frac{1}{2} \mathbb E_{\pmb{x,y} \sim p_{data}} ||\pmb y - f(\pmb{x;\theta})||^2 + C $$
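As a sanity check on this equivalence, the following sketch (illustrative only, assuming a unit-covariance Gaussian per example) verifies numerically that the Gaussian negative log-likelihood equals half the mean squared error plus a constant $C$:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=(5, 3))     # targets
f_x = rng.normal(size=(5, 3))   # model outputs f(x; theta)
d = y.shape[1]

# Gaussian NLL with identity covariance, averaged over examples
nll = np.mean(0.5 * np.sum((y - f_x) ** 2, axis=1)
              + 0.5 * d * np.log(2 * np.pi))

# half mean squared error plus the constant C = (d/2) log(2 pi)
half_mse = 0.5 * np.mean(np.sum((y - f_x) ** 2, axis=1))
C = 0.5 * d * np.log(2 * np.pi)

assert np.isclose(nll, half_mse + C)
```

The constant $C$ depends only on the dimension, so minimizing the NLL and minimizing the squared error give the same $\theta$.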

Output Unit

  1. Linear Units for Gaussian Output Distributions

$$ p(\pmb{y | x})= \mathcal N(\pmb{y;\hat y,I}) $$

  1. Sigmoid Units for Bernoulli Distributions (binary classification)

$$ \sigma(x) = \frac{1}{1+e^{-x}} \\\ \\ \hat y = \sigma(\pmb{w^Th} +b), \ y \in \lbrace0,1\rbrace\\\ \\ z = \pmb{w^Th} +b, \ \text{called the logit}\\\ \\ \begin{aligned} \log \tilde P(y) &= yz \\ \tilde P(y) &= e^{yz} \\ P(y) &= \frac{e^{yz}}{\sum_{y'=0}^1 e^{y'z}} \\ &= \sigma((2y-1)z) \end{aligned} \\\ \\ \begin{aligned} \therefore J(\theta) &= -\log P(\pmb{y|x}) \\ &= -\log \sigma((2y-1)z) \\ &= \zeta((1-2y)z) \end{aligned} $$

Here $\zeta(x) = \log(1+e^x)$ is the softplus function.
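A minimal numpy check (a sketch, not from the text) that the softplus form of the loss, $\zeta((1-2y)z)$ with $\zeta(x)=\log(1+e^x)$, agrees with $-\log\sigma((2y-1)z)$:

```python
import numpy as np

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = np.array([-3.0, -0.5, 0.0, 2.0])   # logits
y = np.array([0, 1, 1, 0])             # labels in {0, 1}

loss_softplus = softplus((1 - 2 * y) * z)
loss_direct = -np.log(sigmoid((2 * y - 1) * z))

assert np.allclose(loss_softplus, loss_direct)
```

The softplus form is preferred in practice because it avoids computing $\log$ of a sigmoid that may underflow to zero.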

  1. Softmax Units For Multinoulli Distribution
  • Softmax 定义

$$ Softmax(z_i) = \frac{e^{z_i}}{\sum_j e^{z_j}} \\\ \\ \log \ Softmax(z_i) = z_i - \log\sum_j e^{z_j} $$

  • Softmax 性质

$$ Softmax(\pmb z) = Softmax(\pmb z +c) \\\ \\ Softmax(\pmb z) = Softmax(\pmb z - \max_i z_i) $$
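Both properties can be verified directly; the second is exactly the standard trick for computing softmax without overflow. A short numpy sketch:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; by shift invariance
    # the output is unchanged
    z = z - np.max(z)
    e = np.exp(z)
    return e / np.sum(e)

z = np.array([1.0, 2.0, 3.0])
assert np.allclose(softmax(z), softmax(z + 100.0))   # shift invariance
assert np.isclose(np.sum(softmax(z)), 1.0)           # valid distribution

# without the max subtraction, logits like these would overflow exp()
big = np.array([1000.0, 1001.0])
assert not np.any(np.isnan(softmax(big)))
```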

Hidden Units

Rectified Linear Units

$$ h(x) = \begin{cases} x , x\ge 0 \\ 0, x\lt 0 \end{cases} $$

Generalization

Leaky ReLU

In ReLU, the output is 0 whenever the input is negative. Leaky ReLU instead outputs a small non-zero value for negative inputs, which helps mitigate the "dying ReLU" problem.

$$ f(x) = \begin{cases} x & x\ge 0 \\ \alpha x & x\lt 0, \alpha \lt 1 \end{cases} $$

Parametric ReLU

Builds on Leaky ReLU by making the negative-side slope a learnable parameter, so the model can adapt the slope that suits it best.

$$ f(x) = \begin{cases} x & x\ge 0 \\ ax & x\lt 0 \end{cases} \\\ \\ a \ \text{is a learnable parameter} $$

Exponential Linear Unit (ELU)

Adds an exponential branch to ReLU, which can speed up convergence and gives a non-zero output for negative inputs.

$$ f(x) = \begin{cases} x & x\ge 0 \\ \alpha (e^x -1) & x\lt 0 \end{cases} $$

Randomized ReLU

Adds randomness to ReLU: during training the negative-side slope is sampled at random, which helps prevent overfitting.

$$ f(x) = \begin{cases} x & x\ge 0 \\ \alpha x & x\lt 0 \end{cases} \\\ \\ \alpha \ \text{is a slope sampled at random during training} $$

Maxout

Maxout is a more general activation function: it outputs the maximum over several affine transformations of its input, letting the model learn more complex piecewise-linear functions.

$$ f(\pmb x) = \underset{i}{\max}(\pmb w_i^T \pmb x + b_i) $$
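The activation functions above can be sketched in a few lines of numpy (the maxout weights `W` and biases `b` below are hypothetical illustration values, using $k = 2$ affine pieces to recover $|x|$):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):
    return np.where(x >= 0, x, alpha * x)

def elu(x, alpha=1.0):
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def maxout(x, W, b):
    # W: (k, in_dim), b: (k,); output is the max over the k affine pieces
    return np.max(W @ x + b, axis=0)

x = np.array([-2.0, -0.5, 0.0, 1.5])
assert np.all(relu(x) >= 0)
assert leaky_relu(-1.0) == -0.01
assert np.isclose(elu(-1.0), np.exp(-1.0) - 1.0)

# maxout with k = 2 pieces represents |x| via max(x, -x)
W = np.array([[1.0], [-1.0]]); b = np.zeros(2)
assert maxout(np.array([-3.0]), W, b) == 3.0
```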

Architecture Design

Universal Approximation Properties and Depth

  1. Universal Approximation Theorem
  • A feedforward network with a single hidden layer containing enough units can approximate any continuous function on a compact domain to arbitrary accuracy
  1. Other Theorems
  • With $d$ inputs, depth $l$, and $n$ units per hidden layer, the number of linear regions is

$$ O\left( \binom{n}{d}^{d(l-1)} n^d \right) \\\ \\ \text{For maxout with } k \text{ filters per unit: } O\Big(k^{(l-1)+d}\Big) $$

Back-Propagation & Differentiation Algorithms

Back-Propagation

Errors are propagated from the output layer back toward the input layer; each neuron's contribution to the error is computed, and the network weights are updated according to that contribution, progressively improving the network's predictions.

Differentiation Algorithms

  1. Forward Difference Method:

$$\frac{df(x)}{dx} \approx \frac{f(x+h)-f(x)}{h}$$

  1. Backward Difference Method:

$$\frac{df(x)}{dx} \approx \frac{f(x)-f(x-h)}{h}$$

  1. Central Difference Method:

$$\frac{df(x)}{dx} \approx \frac{f(x+h)-f(x-h)}{2h}$$

  1. Adaptive Difference Method:

$$\frac{df(x)}{dx} \approx \begin{cases} \frac{f(x+h)-f(x)}{h}, & \left|\frac{f(x+h)-f(x)}{h}-\frac{f(x)-f(x-h)}{h}\right| \leq \epsilon \\ \frac{f(x+h)-f(x-h)}{2h}, & \text{otherwise} \end{cases}$$

Here $f(x)$ is the value of $f$ at $x$, $h$ is the step size, and $\epsilon$ is the error tolerance.
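The central difference is worth its extra function evaluation: its truncation error is $O(h^2)$ versus $O(h)$ for the one-sided formulas. A quick numerical comparison on $f = \sin$ (illustrative values):

```python
import numpy as np

f = np.sin
x, h = 1.0, 1e-4
exact = np.cos(x)                       # true derivative of sin at x

fwd = (f(x + h) - f(x)) / h             # forward difference, O(h) error
ctr = (f(x + h) - f(x - h)) / (2 * h)   # central difference, O(h^2) error

assert abs(ctr - exact) < abs(fwd - exact)
```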

Computational Graphs

  1. Recursively Applying the Chain Rule to Obtain Backprop

$$ u^{(i)} = f^{(i)}(\mathbb A^{(i)}) \\\ \\ \begin{aligned} &\text{for } i = 1,\dots,n_i \text{ do} \\ & \ \ u^{(i)} \leftarrow x_i \\ &\text{end for} \\ &\text{for } i = n_i + 1,\dots,n \text{ do} \\ & \ \ \mathbb A^{(i)} \leftarrow \lbrace u^{(j)} \mid j \in Pa(u^{(i)})\rbrace \\ & \ \ u^{(i)} \leftarrow f^{(i)} (\mathbb A^{(i)}) \\ &\text{end for} \end{aligned} $$

  1. Back-Propagation Computation in Fully-Connected MLP
import numpy as np

def BP_MLP(l, W, b, x, y, f, L, omega, Lambda):
    '''
    Forward propagation through a fully-connected MLP.
    Require: Network depth, l
    Require: W[i], i in {1, ..., l}, the weight matrices of the model
    Require: b[i], i in {1, ..., l}, the bias parameters of the model
    Require: x, the input to process
    Require: y, the target output
    f is the activation function; L is the loss; omega is the
    parameter penalty, weighted by Lambda.
    '''
    h = [x]                               # h[0] is the input
    a = [None]                            # a[k]: pre-activation of layer k
    for k in range(1, l + 1):
        a.append(b[k] + W[k] @ h[k - 1])  # affine transformation
        h.append(f(a[k]))                 # elementwise nonlinearity
    y_hat = h[l]
    J = L(y, y_hat) + Lambda * omega(W)   # loss plus weight penalty
    return y_hat, J
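The forward pass can be paired with the corresponding backward pass. Below is a self-contained sketch (the function name `forward_backward` and the choices of ReLU hidden units, a linear output layer, and squared-error loss are illustrative assumptions, not from the text):

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def forward_backward(W, b, x, y):
    # forward pass, caching pre-activations a[k] and activations h[k];
    # W[0], b[0] are unused so indices match layers 1..l
    l = len(W) - 1
    h, a = [x], [None]
    for k in range(1, l + 1):
        a.append(b[k] + W[k] @ h[k - 1])
        h.append(relu(a[k]) if k < l else a[k])  # linear output unit
    y_hat = h[l]

    # backward pass for J = 0.5 * ||y_hat - y||^2
    g = y_hat - y                            # dJ/da[l] at the linear output
    grads_W = [None] * (l + 1)
    grads_b = [None] * (l + 1)
    for k in range(l, 0, -1):
        if k < l:
            g = g * (a[k] > 0)               # back through the ReLU
        grads_b[k] = g                       # dJ/db[k]
        grads_W[k] = np.outer(g, h[k - 1])   # dJ/dW[k]
        g = W[k].T @ g                       # propagate to the layer below
    return y_hat, grads_W, grads_b
```

A finite-difference check, as in the Differentiation Algorithms section, is the standard way to validate such hand-written gradients.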

Latest Algorithms

  1. Adaptive Moment Estimation (Adam)

import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=0.001)

  1. Root Mean Square Propagation (RMSprop)

import torch.optim as optim

optimizer = optim.RMSprop(model.parameters(), lr=0.001)

  1. Stochastic Gradient Descent with Momentum (SGDM)

import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

  1. Adaptive Gradient Algorithm (AdaGrad)

import torch.optim as optim

optimizer = optim.Adagrad(model.parameters(), lr=0.001)

  1. AdaDelta

import torch.optim as optim

optimizer = optim.Adadelta(model.parameters(), lr=0.001)
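All of these optimizers plug into the same training loop. A minimal end-to-end sketch (the model and synthetic regression data are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# tiny regression model and synthetic data, for illustration only
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
X = torch.randn(64, 3)
y = X.sum(dim=1, keepdim=True)

optimizer = optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

losses = []
for step in range(200):
    optimizer.zero_grad()   # clear gradients accumulated by backward()
    loss = loss_fn(model(X), y)
    loss.backward()         # back-propagate
    optimizer.step()        # apply the Adam update
    losses.append(loss.item())
```

Swapping in any of the other optimizers only changes the `optim.` line.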

Regularization for Deep Learning

Parameter Norm Penalties

Regularized Objective Function:

$$ \overset{\sim}{J} (\pmb{\theta;X,y}) = J (\pmb{\theta;X,y}) + \alpha \Omega(\pmb \theta) $$

  • $\alpha$: hyperparameter weighting the penalty

  • $\Omega$: Parameter Norm

  • $J$: Standard Objective Function
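In PyTorch the $L^2$ penalty is usually applied through the optimizer's `weight_decay` argument rather than by adding $\Omega$ to the loss explicitly: `weight_decay` $= \alpha$ adds $\alpha\pmb\theta$ to every gradient, i.e. gradient descent on $J + \frac{\alpha}{2}||\pmb\theta||^2$. A short sketch (parameter values are illustrative):

```python
import torch
import torch.optim as optim

# a single parameter vector with zero data loss: only the penalty acts
p = torch.nn.Parameter(torch.ones(3))
optimizer = optim.SGD([p], lr=0.1, weight_decay=0.5)

optimizer.zero_grad()
(0.0 * p.sum()).backward()   # gradient from the "loss" is zero
optimizer.step()             # update: p <- p - lr * (0 + weight_decay * p)

# each component shrinks by the factor (1 - lr * weight_decay) = 0.95
assert torch.allclose(p.detach(), torch.full((3,), 0.95))
```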

$L^2$ Parameter Regularization

  1. Assuming no bias term, $\theta = \omega$

$$ \Omega(\omega) = \frac{1}{2}\omega^T \omega \\\ \\ \overset{\sim}{J} (\pmb{\theta;X,y}) = J (\pmb{\theta;X,y}) + \frac{1}{2}\alpha \pmb{\omega^T \omega} $$

  1. Gradient and weight update

$$ \nabla_{\omega}\overset{\sim}{J} (\pmb{\theta;X,y}) = \nabla_{\omega} J (\pmb{\theta;X,y}) + \alpha \pmb \omega \\\ \\ \pmb{\omega} - \epsilon (\nabla_{\omega} J (\pmb{\theta;X,y}) + \alpha \pmb \omega ) \rightarrow \pmb \omega \\\ \\ (1-\epsilon \alpha) \pmb \omega - \epsilon \nabla_{\omega} J(\pmb{\omega;X,y}) \rightarrow \pmb \omega \\\ \\ \pmb \omega^* = \arg\min_{\omega} J(\pmb \omega) \\\ \\ \hat J (\pmb \theta) = J(\pmb \omega^*) + \frac{1}{2} (\pmb{\omega -\omega^*})^T \pmb H(\pmb{\omega - \omega^*}) \ \ \text{(second-order Taylor expansion; the first-order term vanishes at the minimum)} $$

  1. Hessian matrix

$$ \pmb H = \text{the matrix of second derivatives of } J \\\ \\ \because \frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i} \\\ \\ \therefore \pmb H \ \text{is a symmetric matrix} $$

  1. Minimum

$$ \text{The minimum is attained when } \alpha \overset{\sim}{\pmb \omega} + \pmb{H}(\overset{\sim}{\pmb \omega} - \pmb \omega^* ) = 0 \\\ \\ \begin{aligned} \overset{\sim}{\pmb \omega} &= (\pmb{H} + \alpha \pmb I)^{-1} \pmb {H\omega^*} \\ &= (Q\Lambda Q^T + \alpha I)^{-1} Q\Lambda Q^T \omega^* \\ &= Q(\Lambda + \alpha I)^{-1} \Lambda Q^T \omega^* \end{aligned} $$
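The eigendecomposition step can be checked numerically: along eigenvector $i$, the component of $\omega^*$ is rescaled by $\frac{\lambda_i}{\lambda_i + \alpha}$. A numpy sketch with a random symmetric positive-definite $H$ (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5
A = rng.normal(size=(4, 4))
H = A @ A.T + np.eye(4)          # symmetric positive-definite Hessian
w_star = rng.normal(size=4)

# direct solve: w~ = (H + alpha I)^{-1} H w*
w_direct = np.linalg.solve(H + alpha * np.eye(4), H @ w_star)

# eigendecomposition H = Q Lambda Q^T gives the same answer:
# each eigen-component is scaled by lambda_i / (lambda_i + alpha)
lam, Q = np.linalg.eigh(H)
w_eig = Q @ (lam / (lam + alpha) * (Q.T @ w_star))

assert np.allclose(w_direct, w_eig)
```

Directions with large eigenvalues (where $J$ is sharply curved) are barely shrunk; directions with small eigenvalues are shrunk toward zero.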

  1. Example: linear regression

$$ J = \frac{1}{2}\pmb{(X\omega -y)^T(X\omega -y)} + \frac{1}{2}\alpha \omega^T\omega \\\ \\ \pmb \omega = (\pmb{X^TX} + \alpha \pmb I)^{-1} \pmb{X^T y} $$
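With the convention $J = \frac{1}{2}(\pmb{X\omega - y})^T(\pmb{X\omega - y}) + \frac{1}{2}\alpha\pmb{\omega^T\omega}$, the stated closed form is exactly where the gradient vanishes, which is easy to verify numerically (random data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
alpha = 0.1

# closed-form ridge solution: w = (X^T X + alpha I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

# w is a stationary point: the gradient X^T(Xw - y) + alpha w vanishes
grad = X.T @ (X @ w - y) + alpha * w
assert np.allclose(grad, 0.0)
```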

$L^1$ Regularization

  1. Regularization Function

$$ \Omega(\pmb \omega) = ||\pmb \omega||_1 = \sum_i |\omega_i| \\\ \\ \overset{\sim}{J} (\pmb{\theta;X,y}) = J (\pmb{\theta;X,y}) + \alpha ||\pmb \omega ||_1 \\\ \\ \hat J(\pmb \omega) = J(\pmb \omega^*) + \sum_i \Big(\frac{1}{2} H_{i,i}(\omega_i - \omega_i^*)^2 + \alpha |\omega_i|\Big) \ \ \text{(Taylor expansion with a diagonal Hessian)} \\\ \\ \omega_i = \operatorname{sign}(\omega_i^*) \max\Big\lbrace |\omega_i^*| - \frac{\alpha}{H_{i,i}},0\Big\rbrace $$
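The last line is soft thresholding: coordinates with $|\omega_i^*| \le \alpha / H_{i,i}$ are driven exactly to zero, which is why $L^1$ regularization produces sparse solutions. A numpy sketch (the helper name `soft_threshold` and the sample values are illustrative):

```python
import numpy as np

def soft_threshold(w_star, alpha, H_diag):
    # per-coordinate L1 solution under a diagonal Hessian approximation
    return np.sign(w_star) * np.maximum(np.abs(w_star) - alpha / H_diag, 0.0)

w_star = np.array([0.8, -0.3, 0.05, -2.0])
H_diag = np.ones(4)
alpha = 0.5

w = soft_threshold(w_star, alpha, H_diag)
# coordinates with |w*_i| <= alpha / H_ii become exactly zero
assert np.allclose(w, [0.3, 0.0, 0.0, -1.5])
```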

Norm Penalties as Constrained Optimization

Licensed under CC BY-NC-SA 4.0