pyro ExponentialLR: how to set the optimizer learning rate (pytorch, deep neural networks, BNN)

Date: 2024-09-03 19:54:56
Tags: optimizer optim args pyro pytorch lr scheduler

First: pyro does not support ReduceLROnPlateau here, because that scheduler needs the loss as an input value, which is computationally expensive.

For PyTorch learning-rate scheduling in general, this uploader's video is a good reference:

05-01-学习率调整策略_哔哩哔哩_bilibili

Second: points to watch when using a scheduler with SVI.

Three optimizers belong to pyro.optim.PyroOptim: AdagradRMSProp, ClippedAdam, and DCTAdam. Even so, the setup below still raises an error like the following:

optimizer = pyro.optim.SGD
# exponential learning-rate decay
pyro_scheduler = pyro.optim.ExponentialLR({'optimizer': optimizer, 'optim_args': {'lr': learn_rate}, 'gamma': 0.1})
Traceback (most recent call last):
  File "/home/aistudio/bnn_pyro_fso_middle_2_16__256.py", line 441, in <module>
    svi = SVI(model, mean_field_guide, optimizer, loss=Trace_ELBO())
  File "/home/aistudio/external-libraries/pyro/infer/svi.py", line 72, in __init__
    raise ValueError(
ValueError: Optimizer should be an instance of pyro.optim.PyroOptim class.

The correct approach is to put the torch optimizer class (not a pyro one) in the scheduler's dict, and to pass the scheduler itself to SVI:

optimizer = torch.optim.SGD
# exponential learning-rate decay
pyro_scheduler = pyro.optim.ExponentialLR({'optimizer': optimizer, 'optim_args': {'lr': learn_rate}, 'gamma': 0.1})


# Set up the ReduceLROnPlateau scheduler
# In this example, 'min' mode means the learning rate is reduced when the validation loss
# stops decreasing. factor=0.1 means the learning rate is multiplied by 0.1, and patience=10
# means the learning rate is reduced if the validation loss has not improved for 10
# consecutive epochs.
# scheduler = ReduceLROnPlateau(optimizer, 'min', factor=0.5, patience=20, verbose=True)

# svi = SVI(model, mean_field_guide, optimizer, loss=Trace_ELBO())
svi = SVI(model, mean_field_guide, pyro_scheduler, loss=Trace_ELBO())
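The plateau rule described in the comments above can be sketched in plain Python (a hedged simplification, not the pyro or torch API): track the best loss seen so far and multiply the learning rate by `factor` once the loss fails to improve for more than `patience` consecutive epochs. The helper name `plateau_lr` is hypothetical.

```python
def plateau_lr(losses, lr=0.01, factor=0.5, patience=10):
    """Return the learning rate after applying a 'min'-mode plateau rule
    to a history of per-epoch losses (simplified sketch)."""
    best = float("inf")
    bad_epochs = 0
    for loss in losses:
        if loss < best:           # loss improved: reset the counter
            best = loss
            bad_epochs = 0
        else:                     # no improvement this epoch
            bad_epochs += 1
            if bad_epochs > patience:
                lr *= factor      # cut the learning rate
                bad_epochs = 0
    return lr

# Steadily improving loss: lr is untouched.
print(plateau_lr([3.0, 2.0, 1.0]))          # stays at 0.01
# 12 stagnant epochs after one improvement: lr is halved once.
print(plateau_lr([1.0] * 13))               # 0.005
```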

ExponentialLR

ExponentialLR is an exponential-decay learning-rate scheduler: every epoch the learning rate is multiplied by gamma. Be careful not to set gamma too small, otherwise the learning rate will collapse to 0 after just a few epochs.
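To see how sensitive the decay is to gamma, the per-epoch learning rate lr0 * gamma ** epoch can be tabulated directly in plain Python (no torch needed):

```python
# Per-epoch learning rate under ExponentialLR's rule: lr_e = lr0 * gamma ** e
lr0 = 0.01
for gamma in (0.9, 0.1):
    lrs = [lr0 * gamma ** epoch for epoch in range(4)]
    print(f"gamma={gamma}: {lrs}")
# With gamma=0.9 the rate shrinks gently (0.01 -> 0.00729 after 3 epochs);
# with gamma=0.1 it is already 1e-5 after 3 epochs.
```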

scheduler = lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
# exponential decay: lr = initial_lr * gamma ** epoch

This is a wrapper for `torch.optim.lr_scheduler` objects that adjusts learning rates for dynamically generated parameters.

    :param scheduler_constructor: a `torch.optim.lr_scheduler` class
    :param optim_args: a dictionary of learning arguments for the optimizer, or a callable that returns such a dictionary. Must contain the key 'optimizer' whose value is a PyTorch optimizer class.
    :param clip_args: a dictionary of `clip_norm` and/or `clip_value` args, or a callable that returns such a dictionary.

    Example::

        optimizer = torch.optim.SGD
        scheduler = pyro.optim.ExponentialLR({'optimizer': optimizer, 'optim_args': {'lr': 0.01}, 'gamma': 0.1})
        svi = SVI(model, guide, scheduler, loss=TraceGraph_ELBO())
        for i in range(epochs):
            for minibatch in DataLoader(dataset, batch_size):
                svi.step(minibatch)
            scheduler.step()

Source code of pyro.optim.lr_scheduler:

# Copyright (c) 2017-2019 Uber Technologies, Inc.
# SPDX-License-Identifier: Apache-2.0

from typing import Any, Dict, Iterable, List, Optional, Union, ValuesView

from torch import Tensor

from pyro.optim.optim import PyroOptim


class PyroLRScheduler(PyroOptim):
    """
    A wrapper for :class:`~torch.optim.lr_scheduler` objects that adjusts learning rates
    for dynamically generated parameters.

    :param scheduler_constructor: a :class:`~torch.optim.lr_scheduler`
    :param optim_args: a dictionary of learning arguments for the optimizer or a callable that returns
        such dictionaries. must contain the key 'optimizer' with pytorch optimizer value
    :param clip_args: a dictionary of clip_norm and/or clip_value args or a callable that returns
        such dictionaries.

    Example::

        optimizer = torch.optim.SGD
        scheduler = pyro.optim.ExponentialLR({'optimizer': optimizer, 'optim_args': {'lr': 0.01}, 'gamma': 0.1})
        svi = SVI(model, guide, scheduler, loss=TraceGraph_ELBO())
        for i in range(epochs):
            for minibatch in DataLoader(dataset, batch_size):
                svi.step(minibatch)
            scheduler.step()
    """

    def __init__(
        self,
        scheduler_constructor,
        optim_args: Union[Dict],
        clip_args: Optional[Union[Dict]] = None,
    ):
        # pytorch scheduler
        self.pt_scheduler_constructor = scheduler_constructor
        # torch optimizer
        pt_optim_constructor = optim_args.pop("optimizer")
        # kwargs for the torch optimizer
        optim_kwargs = optim_args.pop("optim_args")
        self.kwargs = optim_args
        super().__init__(pt_optim_constructor, optim_kwargs, clip_args)

    def __call__(self, params: Union[List, ValuesView], *args, **kwargs) -> None:
        super().__call__(params, *args, **kwargs)

    def _get_optim(
        self, params: Union[Tensor, Iterable[Tensor], Iterable[Dict[Any, Any]]]
    ):
        optim = super()._get_optim(params)
        return self.pt_scheduler_constructor(optim, **self.kwargs)

    def step(self, *args, **kwargs) -> None:
        """
        Takes the same arguments as the PyTorch scheduler
        (e.g. optional ``loss`` for ``ReduceLROnPlateau``)
        """
        for scheduler in self.optim_objs.values():
            scheduler.step(*args, **kwargs)
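The `__init__` above explains the required dict shape: `'optimizer'` and `'optim_args'` are popped out, and whatever keys remain (`'gamma'` in the examples) are forwarded to the torch scheduler constructor. A minimal re-implementation of just that splitting step, with a hypothetical helper name (unlike the real `__init__`, this sketch copies the dict so the caller's argument is not mutated):

```python
def split_scheduler_args(args):
    """Split a PyroLRScheduler-style dict the way __init__ above does."""
    args = dict(args)                              # copy: keep the caller's dict intact
    pt_optim_constructor = args.pop("optimizer")   # torch optimizer class
    optim_kwargs = args.pop("optim_args")          # kwargs for that optimizer
    scheduler_kwargs = args                        # leftover keys go to the scheduler
    return pt_optim_constructor, optim_kwargs, scheduler_kwargs

ctor, okw, skw = split_scheduler_args(
    {"optimizer": "torch.optim.SGD", "optim_args": {"lr": 0.01}, "gamma": 0.1}
)
print(ctor)   # the optimizer class placeholder
print(okw)    # {'lr': 0.01}
print(skw)    # {'gamma': 0.1}
```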

From: https://blog.csdn.net/zhangfeng1133/article/details/141811253
