
batch_norm implementation code under the deep learning framework Theano, from the reinforcement learning framework rllab





# encoding: utf-8

import lasagne.layers as L
import lasagne
import theano
import theano.tensor as TT


class ParamLayer(L.Layer):
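    """Outputs a (by default trainable) parameter vector of `num_units`
    values, broadcast across all leading dimensions of the input (e.g. the
    batch dimension), ignoring the input values themselves."""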

    def __init__(self, incoming, num_units, param=lasagne.init.Constant(0.),
                 trainable=True, **kwargs):
        super(ParamLayer, self).__init__(incoming, **kwargs)
        self.num_units = num_units
        self.param = self.add_param(
            param,
            (num_units,),
            name="param",
            trainable=trainable
        )

    def get_output_shape_for(self, input_shape):
        return input_shape[:-1] + (self.num_units,)

    def get_output_for(self, input, **kwargs):
        ndim = input.ndim
        reshaped_param = TT.reshape(self.param, (1,) * (ndim - 1) + (self.num_units,))
        tile_arg = TT.concatenate([input.shape[:-1], [1]])
        tiled = TT.tile(reshaped_param, tile_arg, ndim=ndim)
        return tiled
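

# --- Usage sketch (added for illustration; not part of the original rllab
# code). A layer like this is handy for state-independent outputs, e.g. the
# log-std head of a Gaussian policy. A minimal check of its broadcasting,
# assuming a working theano/lasagne install:
def _demo_param_layer():
    import numpy as np
    l_in = L.InputLayer(shape=(None, 3))
    l_param = ParamLayer(l_in, num_units=2, param=lasagne.init.Constant(0.5))
    x = TT.matrix('x')
    f = theano.function([x], L.get_output(l_param, x))
    # Any (N, 3) input yields an (N, 2) output filled with the parameter:
    print(f(np.zeros((4, 3), dtype=theano.config.floatX)))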


class OpLayer(L.MergeLayer):
    def __init__(self, incoming, op,
                 shape_op=lambda x: x, extras=None, **kwargs):
        if extras is None:
            extras = []
        incomings = [incoming] + extras
        super(OpLayer, self).__init__(incomings, **kwargs)
        self.op = op
        self.shape_op = shape_op
        self.incomings = incomings

    def get_output_shape_for(self, input_shapes):
        return self.shape_op(*input_shapes)

    def get_output_for(self, inputs, **kwargs):
        return self.op(*inputs)
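

# --- Usage sketch (illustrative, not from the original post). `op` maps the
# input tensors to the output tensor, `shape_op` maps their shapes to the
# output shape, and `extras` supplies additional incoming layers.
def _demo_op_layer():
    l_a = L.InputLayer(shape=(None, 4))
    l_b = L.InputLayer(shape=(None, 4))
    # elementwise exp of a single input; the shape is unchanged by default
    l_exp = OpLayer(l_a, op=TT.exp)
    assert l_exp.output_shape == (None, 4)
    # elementwise sum of two inputs, with an explicit shape function
    l_sum = OpLayer(l_a, op=lambda a, b: a + b,
                    shape_op=lambda sa, sb: sa, extras=[l_b])
    assert l_sum.output_shape == (None, 4)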


class BatchNormLayer(L.Layer):
    """
    lasagne.layers.BatchNormLayer(incoming, axes='auto', epsilon=1e-4,
    alpha=0.1, mode='low_mem',
    beta=lasagne.init.Constant(0), gamma=lasagne.init.Constant(1),
    mean=lasagne.init.Constant(0), std=lasagne.init.Constant(1), **kwargs)

    Batch Normalization

    This layer implements batch normalization of its inputs, following [1]_:

    .. math::
        y = \\frac{x - \\mu}{\\sqrt{\\sigma^2 + \\epsilon}} \\gamma + \\beta

    That is, the input is normalized to zero mean and unit variance, and then
    linearly transformed. The crucial part is that the mean and variance are
    computed across the batch dimension, i.e., over examples, not per example.

    During training, :math:`\\mu` and :math:`\\sigma^2` are defined to be the
    mean and variance of the current input mini-batch :math:`x`, and during
    testing, they are replaced with average statistics over the training
    data. Consequently, this layer has four stored parameters: :math:`\\beta`,
    :math:`\\gamma`, and the averages :math:`\\mu` and :math:`\\sigma^2`
    (nota bene: instead of :math:`\\sigma^2`, the layer actually stores
    :math:`1 / \\sqrt{\\sigma^2 + \\epsilon}`, for compatibility to cuDNN).
    By default, this layer learns the average statistics as exponential moving
    averages computed during training, so it can be plugged into an existing
    network without any changes of the training procedure (see Notes).

    Parameters
    ----------
    incoming : a :class:`Layer` instance or a tuple
        The layer feeding into this layer, or the expected input shape
    axes : 'auto', int or tuple of int
        The axis or axes to normalize over. If ``'auto'`` (the default),
        normalize over all axes except for the second: this will normalize over
        the minibatch dimension for dense layers, and additionally over all
        spatial dimensions for convolutional layers.
    epsilon : scalar
        Small constant :math:`\\epsilon` added to the variance before taking
        the square root and dividing by it, to avoid numerical problems
    alpha : scalar
        Coefficient for the exponential moving average of batch-wise means and
        standard deviations computed during training; the closer to one, the
        more it will depend on the last batches seen
    beta : Theano shared variable, expression, numpy array, callable or None
        Initial value, expression or initializer for :math:`\\beta`. Must match
        the incoming shape, skipping all axes in `axes`. Set to ``None`` to fix
        it to 0.0 instead of learning it.
        See :func:`lasagne.utils.create_param` for more information.
    gamma : Theano shared variable, expression, numpy array, callable or None
        Initial value, expression or initializer for :math:`\\gamma`. Must
        match the incoming shape, skipping all axes in `axes`. Set to ``None``
        to fix it to 1.0 instead of learning it.
        See :func:`lasagne.utils.create_param` for more information.
    mean : Theano shared variable, expression, numpy array, or callable
        Initial value, expression or initializer for :math:`\\mu`. Must match
        the incoming shape, skipping all axes in `axes`.
        See :func:`lasagne.utils.create_param` for more information.
    std : Theano shared variable, expression, numpy array, or callable
        Initial value, expression or initializer for :math:`1 / \\sqrt{
        \\sigma^2 + \\epsilon}`. Must match the incoming shape, skipping all
        axes in `axes`.
        See :func:`lasagne.utils.create_param` for more information.
    **kwargs
        Any additional keyword arguments are passed to the :class:`Layer`
        superclass.

    Notes
    -----
    This layer should be inserted between a linear transformation (such as a
    :class:`DenseLayer`, or :class:`Conv2DLayer`) and its nonlinearity. The
    convenience function :func:`batch_norm` modifies an existing layer to
    insert batch normalization in front of its nonlinearity.

    The behavior can be controlled by passing keyword arguments to
    :func:`lasagne.layers.get_output()` when building the output expression
    of any network containing this layer.

    During training, [1]_ normalize each input mini-batch by its statistics
    and update an exponential moving average of the statistics to be used for
    validation. This can be achieved by passing ``deterministic=False``.
    For validation, [1]_ normalize each input mini-batch by the stored
    statistics. This can be achieved by passing ``deterministic=True``.

    For more fine-grained control, ``batch_norm_update_averages`` can be passed
    to update the exponential moving averages (``True``) or not (``False``),
    and ``batch_norm_use_averages`` can be passed to use the exponential moving
    averages for normalization (``True``) or normalize each mini-batch by its
    own statistics (``False``). These settings override ``deterministic``.

    Note that for testing a model after training, [1]_ replace the stored
    exponential moving average statistics by fixing all network weights and
    re-computing average statistics over the training data in a layerwise
    fashion. This is not part of the layer implementation.

    In case you set `axes` to not include the batch dimension (the first axis,
    usually), normalization is done per example, not across examples. This does
    not require any averages, so you can pass ``batch_norm_update_averages``
    and ``batch_norm_use_averages`` as ``False`` in this case.

    See also
    --------
    batch_norm : Convenience function to apply batch normalization to a layer

    References
    ----------
    .. [1] Ioffe, Sergey and Szegedy, Christian (2015):
           Batch Normalization: Accelerating Deep Network Training by Reducing
           Internal Covariate Shift. http://arxiv.org/abs/1502.03167.
    """
    def __init__(self, incoming, axes='auto', epsilon=1e-4, alpha=0.1,
                 mode='low_mem', beta=lasagne.init.Constant(0), gamma=lasagne.init.Constant(1),
                 mean=lasagne.init.Constant(0), std=lasagne.init.Constant(1), **kwargs):
        super(BatchNormLayer, self).__init__(incoming, **kwargs)

        if axes == 'auto':
            # default: normalize over all but the second axis
            axes = (0,) + tuple(range(2, len(self.input_shape)))
        elif isinstance(axes, int):
            axes = (axes,)
        self.axes = axes

        self.epsilon = epsilon
        self.alpha = alpha
        self.mode = mode

        # create parameters, ignoring all dimensions in axes
        shape = [size for axis, size in enumerate(self.input_shape)
                 if axis not in self.axes]
        if any(size is None for size in shape):
            raise ValueError("BatchNormLayer needs specified input sizes for "
                             "all axes not normalized over.")
        if beta is None:
            self.beta = None
        else:
            self.beta = self.add_param(beta, shape, 'beta',
                                       trainable=True, regularizable=False)
        if gamma is None:
            self.gamma = None
        else:
            self.gamma = self.add_param(gamma, shape, 'gamma',
                                        trainable=True, regularizable=False)
        self.mean = self.add_param(mean, shape, 'mean',
                                   trainable=False, regularizable=False)
        self.std = self.add_param(std, shape, 'std',
                                  trainable=False, regularizable=False)

    def get_output_for(self, input, deterministic=False, **kwargs):
        input_mean = input.mean(self.axes)
        input_std = TT.sqrt(input.var(self.axes) + self.epsilon)

        # Decide whether to use the stored averages or mini-batch statistics
        use_averages = kwargs.get('batch_norm_use_averages',
                                  deterministic)
        if use_averages:
            mean = self.mean
            std = self.std
        else:
            mean = input_mean
            std = input_std

        # Decide whether to update the stored averages
        update_averages = kwargs.get('batch_norm_update_averages',
                                     not deterministic)
        if update_averages:
            # Trick: To update the stored statistics, we create memory-aliased
            # clones of the stored statistics:
            running_mean = theano.clone(self.mean, share_inputs=False)
            running_std = theano.clone(self.std, share_inputs=False)
            # set a default update for them:
            running_mean.default_update = ((1 - self.alpha) * running_mean +
                                           self.alpha * input_mean)
            running_std.default_update = ((1 - self.alpha) * running_std +
                                          self.alpha * input_std)
            # and make sure they end up in the graph without participating in
            # the computation (this way their default_update will be collected
            # and applied, but the computation will be optimized away):
            mean += 0 * running_mean
            std += 0 * running_std

        # prepare dimshuffle pattern inserting broadcastable axes as needed
        param_axes = iter(list(range(input.ndim - len(self.axes))))
        pattern = ['x' if input_axis in self.axes
                   else next(param_axes)
                   for input_axis in range(input.ndim)]

        # apply dimshuffle pattern to all parameters
        beta = 0 if self.beta is None else self.beta.dimshuffle(pattern)
        gamma = 1 if self.gamma is None else self.gamma.dimshuffle(pattern)
        mean = mean.dimshuffle(pattern)
        std = std.dimshuffle(pattern)

        # normalize
        normalized = (input - mean) * (gamma * TT.inv(std)) + beta
        return normalized
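

# --- Usage sketch (illustrative, not from the original post). The
# `deterministic` flag switches between mini-batch statistics (training) and
# the stored running averages (inference). Because the averages are updated
# through `default_update`, simply evaluating the training-mode function
# advances them; no explicit `updates` list is needed when compiling.
def _demo_batch_norm_modes():
    import numpy as np
    l_in = L.InputLayer(shape=(None, 5))
    l_bn = BatchNormLayer(l_in)
    x = TT.matrix('x')
    f_train = theano.function(
        [x], L.get_output(l_bn, x, deterministic=False))  # updates averages
    f_test = theano.function(
        [x], L.get_output(l_bn, x, deterministic=True))   # uses stored stats
    data = np.random.randn(32, 5).astype(theano.config.floatX)
    f_train(data)        # one training pass moves the running mean/std
    print(f_test(data))  # inference pass normalizes with the running stats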


def batch_norm(layer, **kwargs):
    """
    Apply batch normalization to an existing layer. This is a convenience
    function modifying an existing layer to include batch normalization: It
    will steal the layer's nonlinearity if there is one (effectively
    introducing the normalization right before the nonlinearity), remove
    the layer's bias if there is one (because it would be redundant), and add
    a :class:`BatchNormLayer` and :class:`NonlinearityLayer` on top.

    Parameters
    ----------
    layer : A :class:`Layer` instance
        The layer to apply the normalization to; note that it will be
        irreversibly modified as specified above
    **kwargs
        Any additional keyword arguments are passed on to the
        :class:`BatchNormLayer` constructor.

    Returns
    -------
    BatchNormLayer or NonlinearityLayer instance
        A batch normalization layer stacked on the given modified `layer`, or
        a nonlinearity layer stacked on top of both if `layer` was nonlinear.

    Examples
    --------
    Just wrap any layer into a :func:`batch_norm` call on creating it:

    >>> from lasagne.layers import InputLayer, DenseLayer, batch_norm
    >>> from lasagne.nonlinearities import tanh
    >>> l1 = InputLayer((64, 768))
    >>> l2 = batch_norm(DenseLayer(l1, num_units=500, nonlinearity=tanh))

    This introduces batch normalization right before its nonlinearity:

    >>> from lasagne.layers import get_all_layers
    >>> [l.__class__.__name__ for l in get_all_layers(l2)]
    ['InputLayer', 'DenseLayer', 'BatchNormLayer', 'NonlinearityLayer']
    """
    nonlinearity = getattr(layer, 'nonlinearity', None)
    if nonlinearity is not None:
        layer.nonlinearity = lasagne.nonlinearities.identity
    if hasattr(layer, 'b') and layer.b is not None:
        del layer.params[layer.b]
        layer.b = None
    layer = BatchNormLayer(layer, **kwargs)
    if nonlinearity is not None:
        layer = L.NonlinearityLayer(layer, nonlinearity)
    return layer
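

# --- Usage sketch (illustrative, not from the original post). batch_norm()
# mutates the wrapped layer: its bias is removed and its nonlinearity is
# moved into a trailing NonlinearityLayer, so normalization happens between
# the linear transform and the activation.
def _demo_batch_norm_wrapper():
    l_in = L.InputLayer(shape=(None, 8))
    dense = L.DenseLayer(l_in, num_units=16,
                         nonlinearity=lasagne.nonlinearities.rectify)
    top = batch_norm(dense)
    assert dense.b is None
    assert dense.nonlinearity is lasagne.nonlinearities.identity
    assert isinstance(top, L.NonlinearityLayer)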


From: https://www.cnblogs.com/devilmaycry812839668/p/18018815
