[Reinforcement Learning] Solving the CartPole Inverted Pendulum Problem with the Policy Gradient Algorithm + Hands-on PyTorch Code


1. Introduction to the Inverted Pendulum Problem

The agent must decide between two actions - pushing the cart to the left or to the right - so that the pole attached to it stays upright.
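
To get a feel for the environment, here is a minimal sketch (assuming the classic Gym < 0.26 API, which the code in Section 4.2 also uses) that rolls out a random policy, just to show the two discrete actions and the +1-per-step reward signal.

import gym

# Roll out one episode of CartPole-v0 with a random policy.
# Action 0 pushes the cart to the left, action 1 pushes it to the right; the episode
# ends when the pole tips too far or the cart leaves the track.
env = gym.make('CartPole-v0')
state = env.reset()                      # 4-dim state: cart position, cart velocity, pole angle, pole angular velocity
done, ep_reward = False, 0.0
while not done:
    action = env.action_space.sample()   # random left/right action, for illustration only
    state, reward, done, _ = env.step(action)
    ep_reward += reward                  # +1 for every step the pole stays upright
print("Return of a random policy:", ep_reward)
env.close()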

2. A Brief Introduction to the Policy Gradient Algorithm

We can plug the sampled data into the formula below and compute the gradient. That is, we take every sampled state-action pair $(s, a)$, compute the log probability $\log p_\theta\left(a_t^n \mid s_t^n\right)$ of taking that action in that state, take the gradient of this log probability, and weight the gradient by the return of the whole episode. Once the gradient has been computed, we can update the model.

$$\nabla \bar{R}_\theta=\frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_n} R\left(\tau^n\right) \nabla \log p_\theta\left(a_t^n \mid s_t^n\right)$$
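
To make the update concrete, here is a self-contained toy sketch of it in PyTorch. The network, optimizer, and sampled data are hypothetical placeholders, and a Categorical distribution over the two actions is used here for clarity; the full implementation in Section 4.2 uses a Bernoulli distribution over a single network output instead.

import torch
from torch import nn

# Toy batch of sampled (s_t^n, a_t^n) pairs and their episode returns R(tau^n)
policy_net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-2)
states = torch.randn(8, 4)                 # sampled states s_t^n
actions = torch.randint(0, 2, (8,))        # actions a_t^n taken in those states
returns = torch.ones(8)                    # R(tau^n), the whole-episode return used as the weight

dist = torch.distributions.Categorical(logits=policy_net(states))
log_probs = dist.log_prob(actions)         # log p_theta(a_t^n | s_t^n)
loss = -(returns * log_probs).mean()       # minus sign: maximize the return via gradient descent
optimizer.zero_grad()
loss.backward()                            # accumulates the return-weighted gradient of the log probabilities
optimizer.step()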

3. Further Reading

For a more detailed introduction to the policy gradient algorithm, see my earlier post: [EasyRL Study Notes] Chapter 4: Policy Gradient.

Before learning the policy gradient algorithm, it helps to already be familiar with the following topics:

  • Fully connected neural networks
  • Solving classification problems with neural networks
  • The basic working principles of neural networks

4. Hands-on Python Code

4.1 Pre-run Setup

Prepare an RL_Utils.py file; its contents can be obtained from my post [RL Utilities] Common reinforcement learning utility functions (Python code).

This step is important, because this RL_Utils.py file is imported later.
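
If RL_Utils.py is not already on your Python path, one simple option (a hypothetical sketch; the path below is a placeholder for wherever you saved the file) is to append its folder to sys.path before importing:

import sys

# Placeholder path: point this at the folder that contains your RL_Utils.py
sys.path.append(r"D:/path/to/folder/with/RL_Utils")
from RL_Utils import *  # the import at the top of the main script can then be shortened to this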


4.2 Main Code

import argparse
import datetime
import time
from collections import deque

from torch.distributions import Bernoulli
from torch.autograd import Variable
import gym
from torch import nn

# Change this to the path of your own RL_Utils.py file
from Python.ReinforcementLearning.EasyRL.RL_Utils import *


class MemoryQueue:
    def __init__(self):
        self.buffer = deque()

    def push(self, transitions):
        self.buffer.append(transitions)

    def sample(self):
        batch = list(self.buffer)
        return zip(*batch)

    def clear(self):
        self.buffer.clear()

    def __len__(self):
        return len(self.buffer)


# Policy network (fully connected network)
class DNN(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim=128):
        """ Initialize the policy network, a fully connected network
            input_dim: number of input features, i.e. the dimension of the environment state
            output_dim: dimension of the action output
        """
        super(DNN, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)  # input layer
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)  # hidden layer
        self.fc3 = nn.Linear(hidden_dim, output_dim)  # output layer

    def forward(self, x):
        # Activation functions for each layer
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return torch.sigmoid(self.fc3(x))


# PolicyGradient agent
class PolicyGradient:
    def __init__(self, model, memory, arg_dict):
        # Discount factor for future rewards
        self.gamma = arg_dict['gamma']
        self.device = torch.device(arg_dict['device'])
        self.memory = memory
        # Policy network
        self.policy_net = model.to(self.device)
        # Optimizer
        self.optimizer = torch.optim.RMSprop(self.policy_net.parameters(), lr=arg_dict['lr'])

    def sample_action(self, state):
        state = torch.from_numpy(state).float()
        state = Variable(state)
        probs = self.policy_net(state.to(self.device))
        m = Bernoulli(probs)  # Bernoulli distribution over the two actions
        action = m.sample()
        action = int(action.item())  # convert to a scalar
        return action

    def predict_action(self, state):
        # Identical to sample_action: the action is still sampled from the Bernoulli distribution
        state = torch.from_numpy(state).float()
        state = Variable(state)
        probs = self.policy_net(state.to(self.device))
        m = Bernoulli(probs)  # Bernoulli distribution over the two actions
        action = m.sample()
        action = int(action.item())  # convert to a scalar
        return action

    def update(self):
        state_pool, action_pool, reward_pool = self.memory.sample()
        state_pool, action_pool, reward_pool = list(state_pool), list(action_pool), list(reward_pool)
        # Turn the rewards into discounted returns: propagate future rewards backwards with the discount factor
        running_add = 0
        for i in reversed(range(len(reward_pool))):
            if reward_pool[i] == 0:
                # reward was set to 0 at the end of each episode, so this marks an episode boundary
                running_add = 0
            else:
                running_add = running_add * self.gamma + reward_pool[i]
                reward_pool[i] = running_add

        reward_mean = np.mean(reward_pool)  # mean of the returns
        reward_std = np.std(reward_pool)  # standard deviation of the returns
        for i in range(len(reward_pool)):
            # Normalize the returns
            reward_pool[i] = (reward_pool[i] - reward_mean) / reward_std

        # Gradient descent
        self.optimizer.zero_grad()
        for i in range(len(reward_pool)):
            state = state_pool[i]
            action = Variable(torch.FloatTensor([action_pool[i]]))
            reward = reward_pool[i]
            state = Variable(torch.from_numpy(state).float())
            probs = self.policy_net(state.to(self.device))
            m = Bernoulli(probs)
            # Return-weighted loss with a negative sign (turns the maximization problem into a minimization problem)
            loss = -m.log_prob(action.to(self.device)) * reward
            loss.backward()
        self.optimizer.step()
        self.memory.clear()

    def save_model(self, path):
        Path(path).mkdir(parents=True, exist_ok=True)
        torch.save(self.policy_net.state_dict(), path + 'checkpoint.pt')

    def load_model(self, path):
        self.policy_net.load_state_dict(torch.load(path + 'checkpoint.pt'))


# Training function
def train(arg_dict, env, agent):
    # Start timing
    startTime = time.time()
    print(f"Env: {arg_dict['env_name']}, Algorithm: {arg_dict['algo_name']}, Device: {arg_dict['device']}")
    print("Starting to train the agent......")
    # Record the reward of each episode
    rewards = []
    for epoch in range(arg_dict['train_eps']):
        state = env.reset()
        ep_reward = 0
        for _ in range(arg_dict['ep_max_steps']):
            # Render
            if arg_dict['train_render']:
                env.render()
            # Sample an action
            action = agent.sample_action(state)
            # Execute the action and get the next state, reward, and done flag
            next_state, reward, done, _ = env.step(action)
            ep_reward += reward
            # If the episode has ended, set the reward to 0
            if done:
                reward = 0
            # Store the sampled transition
            agent.memory.push((state, float(action), reward))
            # Update the state: the current state becomes the next state
            state = next_state
            # If the episode has ended, break out of the loop
            if done:
                break
        if (epoch + 1) % 10 == 0:
            print(f"Epochs:{epoch + 1}/{arg_dict['train_eps']}, Reward:{ep_reward:.2f}")
        # Update the agent every few sampled episodes
        if (epoch + 1) % arg_dict['update_fre'] == 0:
            agent.update()
        rewards.append(ep_reward)
    print('Training finished, time elapsed: ' + str(time.time() - startTime) + " s")
    # Close the environment
    env.close()
    return {'episodes': range(len(rewards)), 'rewards': rewards}


# Testing function
def test(arg_dict, env, agent):
    startTime = time.time()
    print("开始测试智能体......")
    print(f"环境名: {arg_dict['env_name']}, 算法名: {arg_dict['algo_name']}, Device: {arg_dict['device']}")
    # 记录每个epoch的奖励
    rewards = []
    for epoch in range(arg_dict['test_eps']):
        state = env.reset()
        ep_reward = 0
        for _ in range(arg_dict['ep_max_steps']):
            # Render
            if arg_dict['test_render']:
                env.render()
            action = agent.predict_action(state)
            next_state, reward, done, _ = env.step(action)
            ep_reward += reward
            if done:
                reward = 0
            state = next_state
            if done:
                break
        print(f"Epochs: {epoch + 1}/{arg_dict['test_eps']},Reward: {ep_reward:.2f}")
        rewards.append(ep_reward)
    print("测试结束 , 用时: " + str(time.time() - startTime) + " s")
    env.close()
    return {'episodes': range(len(rewards)), 'rewards': rewards}


# Create the environment and the agent
def create_env_agent(arg_dict):
    # Create the environment
    env = gym.make(arg_dict['env_name'])
    # Set the random seed
    all_seed(env, seed=arg_dict["seed"])
    # Get the state dimension
    try:
        n_states = env.observation_space.n
    except AttributeError:
        n_states = env.observation_space.shape[0]
    # Get the number of actions
    n_actions = env.action_space.n
    print(f"状态数: {n_states}, 动作数: {n_actions}")
    # 将状态数和动作数加入算法参数字典
    arg_dict.update({"n_states": n_states, "n_actions": n_actions})
    model = DNN(n_states, 1, hidden_dim=arg_dict['hidden_dim'])
    memory = MemoryQueue()
    # Instantiate the agent
    agent = PolicyGradient(model, memory, arg_dict)
    # Return the environment and the agent
    return env, agent


if __name__ == '__main__':
    # Prevent the error: OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
    os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
    # Get the current path
    curr_path = os.path.dirname(os.path.abspath(__file__))
    # Get the current time
    curr_time = datetime.datetime.now().strftime("%Y_%m_%d-%H_%M_%S")
    # Hyperparameter settings
    parser = argparse.ArgumentParser(description="hyper parameters")
    parser.add_argument('--algo_name', default='PolicyGradient', type=str, help="name of algorithm")
    parser.add_argument('--env_name', default='CartPole-v0', type=str, help="name of environment")
    parser.add_argument('--train_eps', default=200, type=int, help="episodes of training")
    parser.add_argument('--test_eps', default=20, type=int, help="episodes of testing")
    parser.add_argument('--ep_max_steps', default=100000, type=int,
                        help="steps per episode, much larger value can simulate infinite steps")
    parser.add_argument('--gamma', default=0.99, type=float, help="discounted factor")
    parser.add_argument('--lr', default=0.01, type=float, help="learning rate")
    parser.add_argument('--update_fre', default=10, type=int)
    parser.add_argument('--hidden_dim', default=36, type=int)
    parser.add_argument('--device', default='cpu', type=str, help="cpu or cuda")
    parser.add_argument('--seed', default=520, type=int, help="seed")
    parser.add_argument('--show_fig', default=False, type=bool, help="if show figure or not")
    parser.add_argument('--save_fig', default=True, type=bool, help="if save figure or not")
    parser.add_argument('--train_render', default=False, type=bool,
                        help="Whether to render the environment during training")
    parser.add_argument('--test_render', default=True, type=bool,
                        help="Whether to render the environment during testing")
    args = parser.parse_args()
    default_args = {'result_path': f"{curr_path}/outputs/{args.env_name}/{curr_time}/results/",
                    'model_path': f"{curr_path}/outputs/{args.env_name}/{curr_time}/models/",
                    }
    # Convert the arguments into a dictionary (type: dict)
    arg_dict = {**vars(args), **default_args}
    print("Hyperparameter dictionary:", arg_dict)

    # Create the environment and the agent
    env, agent = create_env_agent(arg_dict)
    # Pass in the hyperparameters, environment, and agent, then start training
    res_dic = train(arg_dict, env, agent)
    print("算法返回结果字典:", res_dic)
    # 保存相关信息
    agent.save_model(path=arg_dict['model_path'])
    save_args(arg_dict, path=arg_dict['result_path'])
    save_results(res_dic, tag='train', path=arg_dict['result_path'])
    plot_rewards(res_dic['rewards'], arg_dict, path=arg_dict['result_path'], tag="train")

    # =================================================================================================
    # Create a fresh environment and agent for testing
    print("=" * 300)
    env, agent = create_env_agent(arg_dict)
    # Load the saved agent
    agent.load_model(path=arg_dict['model_path'])
    res_dic = test(arg_dict, env, agent)
    save_results(res_dic, tag='test', path=arg_dict['result_path'])
    plot_rewards(res_dic['rewards'], arg_dict, path=arg_dict['result_path'], tag="test")

4.3 Run Results

Since some of the output is quite long, only part of it is shown below.

Number of states: 4, number of actions: 2
Env: CartPole-v0, Algorithm: PolicyGradient, Device: cpu
Starting to train the agent......
Epochs:10/200, Reward:10.00
Epochs:20/200, Reward:14.00
Epochs:30/200, Reward:24.00
Epochs:40/200, Reward:51.00
Epochs:50/200, Reward:77.00
Epochs:60/200, Reward:142.00
Epochs:70/200, Reward:58.00
Epochs:80/200, Reward:55.00
Epochs:90/200, Reward:110.00
Epochs:100/200, Reward:114.00
Epochs:110/200, Reward:39.00
Epochs:120/200, Reward:200.00
Epochs:130/200, Reward:180.00
Epochs:140/200, Reward:200.00
Epochs:150/200, Reward:200.00
Epochs:160/200, Reward:200.00
Epochs:170/200, Reward:200.00
Epochs:180/200, Reward:200.00
Epochs:190/200, Reward:200.00
Epochs:200/200, Reward:125.00
Training finished, time elapsed: 19.086568593978882 s
============================================================================================================================================================================================================================================================================================================
Number of states: 4, number of actions: 2
Starting to test the agent......
Env: CartPole-v0, Algorithm: PolicyGradient, Device: cpu
Epochs: 1/20,Reward: 200.00
Epochs: 2/20,Reward: 200.00
Epochs: 3/20,Reward: 200.00
Epochs: 4/20,Reward: 200.00
Epochs: 5/20,Reward: 200.00
Epochs: 6/20,Reward: 200.00
Epochs: 7/20,Reward: 200.00
Epochs: 8/20,Reward: 200.00
Epochs: 9/20,Reward: 200.00
Epochs: 10/20,Reward: 200.00
Epochs: 11/20,Reward: 200.00
Epochs: 12/20,Reward: 200.00
Epochs: 13/20,Reward: 200.00
Epochs: 14/20,Reward: 200.00
Epochs: 15/20,Reward: 200.00
Epochs: 16/20,Reward: 200.00
Epochs: 17/20,Reward: 200.00
Epochs: 18/20,Reward: 200.00
Epochs: 19/20,Reward: 200.00
Epochs: 20/20,Reward: 200.00
Testing finished, time elapsed: 33.2643027305603 s

[Figure: training reward curve]

[Figure: test reward curve]

4.4 Notes on the Rendering Settings

If you find the rendering too time-consuming, you can turn it off through the corresponding settings; conversely, if you want to watch the training process, you can enable rendering there as well, as sketched below.
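
Rendering is controlled by the train_render and test_render arguments defined in the main script. One caveat: because these arguments are declared with type=bool, passing a string such as "--test_render False" on the command line still evaluates to True (any non-empty string is truthy), so the simplest reliable way is to edit the defaults directly in the script, for example:

# Same two arguments as in the main script, with the defaults flipped:
# render during training, skip rendering during testing.
parser.add_argument('--train_render', default=True, type=bool,
                    help="Whether to render the environment during training")
parser.add_argument('--test_render', default=False, type=bool,
                    help="Whether to render the environment during testing")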
