
Hands-on Reinforcement Learning (8.2): Double DQN


1. Code
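
The script below trains DQN and Double DQN agents on Gym's Pendulum-v1 environment, discretizing the one-dimensional continuous torque into 11 discrete actions; the dqn_type argument of the DQN class switches between the two target rules.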

import random
import gym
import numpy as np
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
import rl_utils
from tqdm import tqdm

class Qnet(torch.nn.Module):
    ''' Q-network with a single hidden layer '''
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(Qnet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, action_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)

class DQN:
    ''' DQN algorithm, including Double DQN '''
    def __init__(self,
                 state_dim,
                 hidden_dim,
                 action_dim,
                 learning_rate,
                 gamma,
                 epsilon,
                 target_update,
                 device,
                 dqn_type='VanillaDQN'):
        self.action_dim = action_dim
        self.q_net = Qnet(state_dim, hidden_dim, self.action_dim).to(device)
        self.target_q_net = Qnet(state_dim, hidden_dim,
                                 self.action_dim).to(device)
        self.optimizer = torch.optim.Adam(self.q_net.parameters(),
                                          lr=learning_rate)
        self.gamma = gamma
        self.epsilon = epsilon
        self.target_update = target_update
        self.count = 0
        self.dqn_type = dqn_type
        self.device = device

    def take_action(self, state):  # epsilon-greedy action selection
        if np.random.random() < self.epsilon:
            action = np.random.randint(self.action_dim)
        else:
            state = torch.tensor([state], dtype=torch.float).to(self.device)
            action = self.q_net(state).argmax().item()
        return action

    def max_q_value(self, state):
        state = torch.tensor([state], dtype=torch.float).to(self.device)
        return self.q_net(state).max().item()

    def update(self, transition_dict):
        states = torch.tensor(transition_dict['states'],
                              dtype=torch.float).to(self.device)
        actions = torch.tensor(transition_dict['actions']).view(-1, 1).to(
            self.device)
        rewards = torch.tensor(transition_dict['rewards'],
                               dtype=torch.float).view(-1, 1).to(self.device)
        next_states = torch.tensor(transition_dict['next_states'],
                                   dtype=torch.float).to(self.device)
        dones = torch.tensor(transition_dict['dones'],
                             dtype=torch.float).view(-1, 1).to(self.device)

        q_values = self.q_net(states).gather(1, actions)  # Q-values of the actions actually taken
        # maximum Q-value of the next state
        if self.dqn_type == 'DoubleDQN':  # this is where DQN and Double DQN differ
            # select the action with the training network ...
            max_action = self.q_net(next_states).max(1)[1].view(-1, 1)
            # ... but evaluate it with the target network
            max_next_q_values = self.target_q_net(next_states).gather(1, max_action)
        else:  # vanilla DQN
            # the target network both selects and evaluates the action
            max_next_q_values = self.target_q_net(next_states).max(1)[0].view(-1, 1)
        q_targets = rewards + self.gamma * max_next_q_values * (1 - dones)  # TD target
        dqn_loss = torch.mean(F.mse_loss(q_values, q_targets))  # mean squared error loss
        self.optimizer.zero_grad()  # PyTorch accumulates gradients by default, so reset them explicitly
        dqn_loss.backward()  # backpropagate to update the parameters
        self.optimizer.step()

        if self.count % self.target_update == 0:
            self.target_q_net.load_state_dict(
                self.q_net.state_dict())  # sync the target network
        self.count += 1

def train_DQN(agent, env, num_episodes, replay_buffer, minimal_size,
              batch_size):
    return_list = []
    max_q_value_list = []
    max_q_value = 0
    for i in range(10):
        with tqdm(total=int(num_episodes / 10),
                  desc='Iteration %d' % i) as pbar:
            for i_episode in range(int(num_episodes / 10)):
                episode_return = 0
                state = env.reset()[0]
                done = False
                while not done:
                    action = agent.take_action(state)
                    max_q_value = agent.max_q_value(
                        state) * 0.005 + max_q_value * 0.995  # exponential moving average for smoothing
                    max_q_value_list.append(max_q_value)  # record the smoothed maximum Q-value at each state
                    action_continuous = dis_to_con(action, env,
                                                   agent.action_dim)
                    next_state, reward, terminated, truncated, _ = env.step(
                        [action_continuous])
                    # Pendulum-v1 never terminates on its own; episodes end by
                    # time-limit truncation, so fold truncated into done
                    done = terminated or truncated
                    replay_buffer.add(state, action, reward, next_state, done)
                    state = next_state
                    episode_return += reward
                    if replay_buffer.size() > minimal_size:
                        b_s, b_a, b_r, b_ns, b_d = replay_buffer.sample(
                            batch_size)
                        transition_dict = {
                            'states': b_s,
                            'actions': b_a,
                            'next_states': b_ns,
                            'rewards': b_r,
                            'dones': b_d
                        }
                        agent.update(transition_dict)
                return_list.append(episode_return)
                if (i_episode + 1) % 10 == 0:
                    pbar.set_postfix({
                        'episode':
                        '%d' % (num_episodes / 10 * i + i_episode + 1),
                        'return':
                        '%.3f' % np.mean(return_list[-10:])
                    })
                pbar.update(1)
    return return_list, max_q_value_list

def run_DoubleDQN():
    random.seed(0)
    np.random.seed(0)
    torch.manual_seed(0)
    replay_buffer = rl_utils.ReplayBuffer(buffer_size)
    agent = DQN(state_dim, hidden_dim, action_dim, lr, gamma, epsilon,
                target_update, device, 'DoubleDQN')
    return_list, max_q_value_list = train_DQN(agent, env, num_episodes,
                                              replay_buffer, minimal_size,
                                              batch_size)

    episodes_list = list(range(len(return_list)))
    mv_return = rl_utils.moving_average(return_list, 5)
    plt.plot(episodes_list, mv_return)
    plt.xlabel('Episodes')
    plt.ylabel('Returns')
    plt.title('Double DQN on {}'.format(env_name))
    plt.show()

    frames_list = list(range(len(max_q_value_list)))
    plt.plot(frames_list, max_q_value_list)
    plt.axhline(0, c='orange', ls='--')  # rewards in Pendulum are never positive, so true Q-values stay below 0
    plt.axhline(10, c='red', ls='--')  # crossing this line signals severe overestimation
    plt.xlabel('Frames')
    plt.ylabel('Q value')
    plt.title('Double DQN on {}'.format(env_name))
    plt.show()

def run_dqn():
    random.seed(0)
    np.random.seed(0)
    torch.manual_seed(0)
    replay_buffer = rl_utils.ReplayBuffer(buffer_size)
    agent = DQN(state_dim, hidden_dim, action_dim, lr, gamma, epsilon,
                target_update, device)
    return_list, max_q_value_list = train_DQN(agent, env, num_episodes,
                                              replay_buffer, minimal_size,
                                              batch_size)

    episodes_list = list(range(len(return_list)))
    mv_return = rl_utils.moving_average(return_list, 5)
    plt.plot(episodes_list, mv_return)
    plt.xlabel('Episodes')
    plt.ylabel('Returns')
    plt.title('DQN on {}'.format(env_name))
    plt.show()

    frames_list = list(range(len(max_q_value_list)))
    plt.plot(frames_list, max_q_value_list)
    plt.axhline(0, c='orange', ls='--')  # rewards in Pendulum are never positive, so true Q-values stay below 0
    plt.axhline(10, c='red', ls='--')  # crossing this line signals severe overestimation
    plt.xlabel('Frames')
    plt.ylabel('Q value')
    plt.title('DQN on {}'.format(env_name))
    plt.show()

if __name__ == '__main__':
    lr = 1e-2
    num_episodes = 200
    hidden_dim = 128
    gamma = 0.98
    epsilon = 0.01
    target_update = 50
    buffer_size = 5000
    minimal_size = 1000
    batch_size = 64
    device = torch.device("cuda") if torch.cuda.is_available() else torch.device(
        "cpu")

    env_name = 'Pendulum-v1'
    env = gym.make(env_name)
    state_dim = env.observation_space.shape[0]
    action_dim = 11  # discretize the continuous action into 11 discrete actions
    def dis_to_con(discrete_action, env, action_dim):  # map a discrete action index back to a continuous action
        action_lowbound = env.action_space.low[0]  # lower bound of the continuous action
        action_upbound = env.action_space.high[0]  # upper bound of the continuous action
        # linear interpolation: index 0 maps to the lower bound, index action_dim - 1 to the upper bound;
        # e.g. with Pendulum's torque range [-2, 2] and action_dim = 11, index 5 maps to 0.0
        return action_lowbound + (discrete_action /
                                  (action_dim - 1)) * (action_upbound -
                                                       action_lowbound)

    run_DoubleDQN()  # swap in run_dqn() to train the vanilla DQN baseline instead
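
2. How the Double DQN target differs

The only difference between the two agents is how update() builds the TD target. Writing \(\omega\) for the training network and \(\omega^-\) for the target network, vanilla DQN lets the target network both select and evaluate the next action:

\[ y = r + \gamma \max_{a'} Q_{\omega^-}(s', a') \]

Double DQN decouples the two steps: the training network selects the action and the target network evaluates it:

\[ y = r + \gamma\, Q_{\omega^-}\!\left(s', \arg\max_{a'} Q_{\omega}(s', a')\right) \]

Taking a maximum over noisy estimates biases vanilla DQN toward overestimating \(Q\); the decoupled target mitigates this. The reference lines at 0 and 10 in the Q-value plots make the effect visible: since Pendulum's rewards are never positive, any Q-value above 0 is already an overestimate.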

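3. A minimal sketch of the rl_utils helpers

The script imports rl_utils, the helper module that accompanies the book's code repository; only ReplayBuffer and moving_average are used here. For readers who do not have that file, the following is a minimal sketch matching the interface the script relies on; the class and function below are stand-ins written for this post, not the book's exact code:

import collections
import random
import numpy as np

class ReplayBuffer:
    ''' Minimal experience replay buffer (a sketch, not the book's exact code) '''
    def __init__(self, capacity):
        self.buffer = collections.deque(maxlen=capacity)  # FIFO storage with fixed capacity

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        transitions = random.sample(self.buffer, batch_size)  # uniform sampling
        states, actions, rewards, next_states, dones = zip(*transitions)
        return np.array(states), actions, rewards, np.array(next_states), dones

    def size(self):
        return len(self.buffer)

def moving_average(a, window_size):
    ''' Moving average whose output has the same length as the input; the
        window shrinks at the left edge instead of padding (a simplified
        stand-in for the book's version). '''
    a = np.asarray(a, dtype=float)
    out = np.empty_like(a)
    for i in range(len(a)):
        lo = max(0, i - window_size + 1)
        out[i] = a[lo:i + 1].mean()
    return out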
From: https://www.cnblogs.com/zhangxianrong/p/18054453
