
Reinforcement Learning in Practice: Policy Gradient - CartPole Game Demo


Abstract: An agent learns in an environment. Based on the environment's state (or the observation it receives), it executes an action, and the reward fed back by the environment guides it toward better actions.

This article is shared from the Huawei Cloud community post "Reinforcement Learning from Basics to Advanced - Cases and Practice [5.1]: Policy Gradient - CartPole Game Demo", by 汀丶.

  • Reinforcement learning (RL) is a branch of machine learning. Unlike supervised and unsupervised learning, it focuses on how to act in an environment so as to maximize the expected cumulative reward.
  • Basic workflow: an agent learns in an environment; based on the environment's state (or the observation it receives), it executes an action, and the reward fed back by the environment guides it toward better actions.

For example, in this project's CartPole mini-game, the agent controls the cart beneath the pole shown in the animation below, and it has two available actions: pushing the cart left or right to keep the pole balanced.

(Animated figure: the CartPole game)
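To make this interaction loop concrete, here is a minimal sketch of one CartPole episode driven by a random policy, using the classic Gym API that the rest of this post relies on:

import gym

env = gym.make('CartPole-v0')
obs = env.reset()                       # 4-dim state: cart position, cart velocity, pole angle, pole angular velocity
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # 0 = push the cart left, 1 = push the cart right
    obs, reward, done, info = env.step(action)
    total_reward += reward              # +1 for every step the pole stays upright
print(total_reward)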


1. Introduction to Policy Gradient

  • In reinforcement learning there are two broad families of methods: value-based and policy-based.
  • Typical value-based algorithms are Q-learning and SARSA: they optimize the Q function towards its optimum and then derive the optimal policy from that Q function.
  • The typical policy-based algorithm is Policy Gradient, which optimizes the policy function directly.
  • A neural network is used to fit the policy function, and the policy gradient has to be computed in order to optimize this policy network.
  • The optimization objective is the expected return under the policy π(s,a): the sum over all trajectories of each trajectory's return R weighted by its probability p. When N is large enough, it can be approximated by sampling N episodes and averaging:

$$\bar{R}_\theta = \sum_\tau R(\tau)\,p_\theta(\tau) \approx \frac{1}{N}\sum_{n=1}^{N} R(\tau^n)$$

  • Differentiating this objective with respect to the parameters θ yields the policy gradient:

$$\nabla \bar{R}_\theta \approx \frac{1}{N}\sum_{n=1}^{N}\sum_{t=1}^{T_n} R(\tau^n)\,\nabla \log \pi_\theta(a_t^n \mid s_t^n)$$
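In practice this gradient is obtained by minimizing a surrogate loss with automatic differentiation: the negative log-probability of each action actually taken, weighted by the return. With the per-step return-to-go $G_t$ (computed later by calc_reward_to_go) used in place of the whole-trajectory return, the loss minimized for each sampled episode by the learn() function below is:

$$L(\theta) = \sum_{t} \big(-\log \pi_\theta(a_t \mid s_t)\big)\, G_t$$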

## Install dependencies
!pip install pygame
!pip install gym
!pip install atari_py
!pip install parl
import gym
import os
import random
import collections
import paddle
import paddle.nn as nn
import numpy as np
import paddle.nn.functional as F


2. Model

The model here can be assembled from different neural network components, depending on your needs.

PolicyGradient defines the forward network; you are free to customize the network structure.

class PolicyGradient(nn.Layer):
    def __init__(self, act_dim):
        super(PolicyGradient, self).__init__()
        hid1_size = act_dim * 10
        # two-layer MLP: 4-dim CartPole observation -> hidden layer -> action probabilities
        self.linear1 = nn.Linear(in_features=4, out_features=hid1_size)
        self.linear2 = nn.Linear(in_features=hid1_size, out_features=act_dim)

    def forward(self, obs):
        out = self.linear1(obs)
        out = paddle.tanh(out)
        out = self.linear2(out)
        out = F.softmax(out, axis=-1)  # probability distribution over actions
        return out
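A quick sanity check of the forward pass (a sketch using the imports from the setup cell above; the shapes are CartPole's, a 4-dimensional observation and 2 actions):

model = PolicyGradient(act_dim=2)
dummy_obs = paddle.to_tensor(np.random.randn(1, 4).astype('float32'))
probs = model(dummy_obs)
print(probs.numpy())  # shape (1, 2); the two action probabilities sum to 1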


3. The Agent's Learning Functions

This part covers both exploration (sampling actions) and training of the model.

The Agent is responsible for the interaction between the algorithm and the environment. During this interaction it passes the generated data to the Algorithm to update the Model; data preprocessing is usually also defined here.

def sample(obs, MODEL):
    # sample an action from the policy's output distribution (exploration)
    global ACTION_DIM
    obs = np.expand_dims(obs, axis=0)  # add a batch dimension
    obs = paddle.to_tensor(obs, dtype='float32')
    act_prob = np.squeeze(MODEL(obs).numpy(), axis=0)  # action probabilities, shape [ACTION_DIM]
    act = np.random.choice(range(ACTION_DIM), p=act_prob)
    return act

def learn(obs, action, reward, MODEL):
    # one REINFORCE update: minimize -log pi(a|s) weighted by the return
    obs = np.array(obs).astype('float32')
    obs = paddle.to_tensor(obs)
    act_prob = MODEL(obs)
    action = paddle.to_tensor(action.astype('int32'))
    # negative log-probability of the actions actually taken
    log_prob = paddle.sum(-1.0 * paddle.log(act_prob) * F.one_hot(action, act_prob.shape[1]), axis=1)
    reward = paddle.to_tensor(reward.astype('float32'))
    cost = log_prob * reward
    cost = paddle.sum(cost)
    opt = paddle.optimizer.Adam(learning_rate=LEARNING_RATE,
                                parameters=MODEL.parameters())  # optimizer (dygraph mode), re-created on every update here
    cost.backward()
    opt.step()
    opt.clear_grad()
    return cost.numpy()
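One detail worth flagging: learn() constructs a fresh Adam optimizer on every call, so Adam's moment estimates never accumulate between updates. Training still works, as the log below shows, but a common variant is to create the optimizer once and reuse it. A minimal sketch of that variant (learn_v2 and the passed-in opt are illustrative names, not part of the original post):

def learn_v2(obs, action, reward, model, opt):
    # same REINFORCE update as learn(), but reusing one optimizer instance
    obs = paddle.to_tensor(np.array(obs).astype('float32'))
    act_prob = model(obs)
    action = paddle.to_tensor(np.array(action).astype('int32'))
    neg_log_prob = paddle.sum(-paddle.log(act_prob) * F.one_hot(action, act_prob.shape[1]), axis=1)
    reward = paddle.to_tensor(np.array(reward).astype('float32'))
    cost = paddle.sum(neg_log_prob * reward)
    cost.backward()
    opt.step()
    opt.clear_grad()
    return cost.numpy()

# created once, e.g. inside main(), and passed into every update:
# opt = paddle.optimizer.Adam(learning_rate=LEARNING_RATE, parameters=MODEL.parameters())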


4. Episode Rollout and Evaluation

def run_train(env, MODEL):
    MODEL.train()
    obs_list, action_list, total_reward = [], [], []
    obs = env.reset()
    while True:
        # sample an action and step the environment
        obs_list.append(obs)
        action = sample(obs, MODEL)  # sample an action from the policy
        action_list.append(action)
        obs, reward, isOver, info = env.step(action)
        total_reward.append(reward)
        # episode finished
        if isOver:
            break
    return obs_list, action_list, total_reward

def evaluate(model, env, render=False):
    model.eval()
    eval_reward = []
    for i in range(5):
        obs = env.reset()
        episode_reward = 0
        while True:
            obs = np.expand_dims(obs, axis=0)
            obs = paddle.to_tensor(obs, dtype='float32')
            action = model(obs)
            action = np.argmax(action.numpy())  # greedy action at evaluation time
            obs, reward, done, _ = env.step(action)
            episode_reward += reward
            if render:
                env.render()
            if done:
                break
        eval_reward.append(episode_reward)
    return np.mean(eval_reward)


5. Training and Validation Functions

Set the hyperparameters:

LEARNING_RATE = 0.001  # learning rate
OBS_DIM = None
ACTION_DIM = None

# Given the per-step reward list of one episode, compute the return-to-go G_t for every step
def calc_reward_to_go(reward_list, gamma=1.0):
    for i in range(len(reward_list) - 2, -1, -1):
        # G_t = r_t + γ·r_{t+1} + ... = r_t + γ·G_{t+1}
        reward_list[i] += gamma * reward_list[i + 1]  # G_t
    return np.array(reward_list)
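# Example: with gamma=1.0, calc_reward_to_go([1.0, 1.0, 1.0]) returns array([3., 2., 1.]):
# G_0 = 1+1+1, G_1 = 1+1, G_2 = 1, so each step is credited with all of its future reward.
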
def main():
    global OBS_DIM
    global ACTION_DIM
    train_step_list = []
    train_reward_list = []
    evaluate_step_list = []
    evaluate_reward_list = []
    # initialize the game environment
    env = gym.make('CartPole-v0')
    # observation shape and action dimension
    action_dim = env.action_space.n
    obs_dim = env.observation_space.shape[0]
    OBS_DIM = obs_dim
    ACTION_DIM = action_dim
    max_score = -int(1e4)
    # create the policy network (TARGET_MODEL is created but not used by this policy-gradient example)
    MODEL = PolicyGradient(ACTION_DIM)
    TARGET_MODEL = PolicyGradient(ACTION_DIM)
    # start training
    print("start training...")
    # train for 1000 episodes; evaluation episodes are not counted
    for i in range(1000):
        obs_list, action_list, reward_list = run_train(env, MODEL)
        if i % 10 == 0:
            print("Episode {}, Reward Sum {}.".format(i, sum(reward_list)))
        batch_obs = np.array(obs_list)
        batch_action = np.array(action_list)
        batch_reward = calc_reward_to_go(reward_list)
        cost = learn(batch_obs, batch_action, batch_reward, MODEL)
        if (i + 1) % 100 == 0:
            # set render=True to watch the game; this requires a local run, AIStudio cannot display it
            total_reward = evaluate(MODEL, env, render=False)
            print("Test reward: {}".format(total_reward))

if __name__ == '__main__':
    main()
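The script above uses the classic Gym API (env.reset() returning only the observation, env.step() returning four values), which is what the original AIStudio run relied on. If you run it against gym >= 0.26 or gymnasium instead, the environment calls would need roughly the following adjustments (an assumption about your local setup, not part of the original code):

obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated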

 

W0630 11:26:18.969960 322 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0630 11:26:18.974581 322 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
start training...
Episode 0, Reward Sum 37.0.
Episode 10, Reward Sum 27.0.
Episode 20, Reward Sum 32.0.
Episode 30, Reward Sum 20.0.
Episode 40, Reward Sum 18.0.
Episode 50, Reward Sum 38.0.
Episode 60, Reward Sum 52.0.
Episode 70, Reward Sum 19.0.
Episode 80, Reward Sum 27.0.
Episode 90, Reward Sum 13.0.
Test reward: 42.8
Episode 100, Reward Sum 28.0.
Episode 110, Reward Sum 44.0.
Episode 120, Reward Sum 30.0.
Episode 130, Reward Sum 28.0.
Episode 140, Reward Sum 27.0.
Episode 150, Reward Sum 47.0.
Episode 160, Reward Sum 55.0.
Episode 170, Reward Sum 26.0.
Episode 180, Reward Sum 47.0.
Episode 190, Reward Sum 17.0.
Test reward: 42.8
Episode 200, Reward Sum 23.0.
Episode 210, Reward Sum 19.0.
Episode 220, Reward Sum 15.0.
Episode 230, Reward Sum 59.0.
Episode 240, Reward Sum 59.0.
Episode 250, Reward Sum 32.0.
Episode 260, Reward Sum 58.0.
Episode 270, Reward Sum 18.0.
Episode 280, Reward Sum 24.0.
Episode 290, Reward Sum 64.0.
Test reward: 116.8
Episode 300, Reward Sum 54.0.
Episode 310, Reward Sum 28.0.
Episode 320, Reward Sum 44.0.
Episode 330, Reward Sum 18.0.
Episode 340, Reward Sum 89.0.
Episode 350, Reward Sum 26.0.
Episode 360, Reward Sum 57.0.
Episode 370, Reward Sum 54.0.
Episode 380, Reward Sum 105.0.
Episode 390, Reward Sum 56.0.
Test reward: 94.0
Episode 400, Reward Sum 70.0.
Episode 410, Reward Sum 35.0.
Episode 420, Reward Sum 45.0.
Episode 430, Reward Sum 117.0.
Episode 440, Reward Sum 50.0.
Episode 450, Reward Sum 35.0.
Episode 460, Reward Sum 41.0.
Episode 470, Reward Sum 43.0.
Episode 480, Reward Sum 75.0.
Episode 490, Reward Sum 37.0.
Test reward: 57.6
Episode 500, Reward Sum 40.0.
Episode 510, Reward Sum 85.0.
Episode 520, Reward Sum 86.0.
Episode 530, Reward Sum 30.0.
Episode 540, Reward Sum 68.0.
Episode 550, Reward Sum 25.0.
Episode 560, Reward Sum 82.0.
Episode 570, Reward Sum 54.0.
Episode 580, Reward Sum 53.0.
Episode 590, Reward Sum 58.0.
Test reward: 147.2
Episode 600, Reward Sum 24.0.
Episode 610, Reward Sum 78.0.
Episode 620, Reward Sum 62.0.
Episode 630, Reward Sum 58.0.
Episode 640, Reward Sum 50.0.
Episode 650, Reward Sum 67.0.
Episode 660, Reward Sum 68.0.
Episode 670, Reward Sum 51.0.
Episode 680, Reward Sum 36.0.
Episode 690, Reward Sum 69.0.
Test reward: 84.2
Episode 700, Reward Sum 34.0.
Episode 710, Reward Sum 59.0.
Episode 720, Reward Sum 56.0.
Episode 730, Reward Sum 72.0.
Episode 740, Reward Sum 28.0.
Episode 750, Reward Sum 35.0.
Episode 760, Reward Sum 54.0.
Episode 770, Reward Sum 61.0.
Episode 780, Reward Sum 32.0.
Episode 790, Reward Sum 147.0.
Test reward: 123.0
Episode 800, Reward Sum 129.0.
Episode 810, Reward Sum 65.0.
Episode 820, Reward Sum 73.0.
Episode 830, Reward Sum 54.0.
Episode 840, Reward Sum 60.0.
Episode 850, Reward Sum 71.0.
Episode 860, Reward Sum 54.0.
Episode 870, Reward Sum 74.0.
Episode 880, Reward Sum 34.0.
Episode 890, Reward Sum 55.0.
Test reward: 104.8
Episode 900, Reward Sum 41.0.
Episode 910, Reward Sum 111.0.
Episode 920, Reward Sum 33.0.
Episode 930, Reward Sum 49.0.
Episode 940, Reward Sum 62.0.
Episode 950, Reward Sum 114.0.
Episode 960, Reward Sum 52.0.
Episode 970, Reward Sum 64.0.
Episode 980, Reward Sum 94.0.
Episode 990, Reward Sum 90.0.
Test reward: 72.2

Fork the project from the project link and it can be run directly.

 


From: https://blog.51cto.com/u_15214399/6618052
