
Introducing 'Pearl': Pioneering the Next Wave of Intelligent, Production-Ready Agents

Let's Explore How Pearl Is Transforming Theory into Practice:

Let's Learn How to Use It

To install Pearl, simply clone the repo and install it with pip:

git clone https://github.com/facebookresearch/Pearl.git
cd Pearl
pip install -e .
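
To confirm the install worked, a quick sanity check is to import the main agent class in a Python one-liner (PearlAgent is the class used in the example below):

python -c "from pearl.pearl_agent import PearlAgent; print('Pearl is ready')"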

Let’s Start

To kick off a Pearl agent with a classic reinforcement learning environment, here’s a quick example.

First things first, let's import the necessary modules:

# The main agent class that ties all the modules together
from pearl.pearl_agent import PearlAgent

# Encodes discrete actions as one-hot tensors for the Q-network
from pearl.action_representation_modules.one_hot_action_representation_module import (
    OneHotActionTensorRepresentationModule,
)

# Deep Q-learning policy learner
from pearl.policy_learners.sequential_decision_making.deep_q_learning import (
    DeepQLearning,
)

# First-in-first-out replay buffer for off-policy learning
from pearl.replay_buffers.sequential_decision_making.fifo_off_policy_replay_buffer import (
    FIFOOffPolicyReplayBuffer,
)

# Wrapper that adapts Gym/Gymnasium environments to Pearl's interface
from pearl.utils.instantiations.environments.gym_environment import GymEnvironment

env = GymEnvironment("CartPole-v1")  # CartPole via Pearl's Gym wrapper
num_actions = env.action_space.n

# Assemble the agent: a DQN policy learner plus a FIFO replay buffer
agent = PearlAgent(
    policy_learner=DeepQLearning(
        state_dim=env.observation_space.shape[0],  # CartPole observations are 4-dimensional
        action_space=env.action_space,
        hidden_dims=[64, 64],  # two hidden layers in the Q-network
        training_rounds=20,
        action_representation_module=OneHotActionTensorRepresentationModule(
            max_number_actions=num_actions
        ),
    ),
    replay_buffer=FIFOOffPolicyReplayBuffer(10_000),  # replay buffer capacity
)

# Run one episode: act, observe the result, and learn from it
observation, action_space = env.reset()
agent.reset(observation, action_space)
done = False
while not done:
    action = agent.act(exploit=False)  # exploit=False keeps exploration on
    action_result = env.step(action)
    agent.observe(action_result)
    agent.learn()
    done = action_result.done
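
The loop above runs a single episode. In practice you would train the same agent over many episodes; here is a minimal sketch of that, assuming action_result.reward holds the per-step reward (as Pearl's ActionResult exposes) and the episode count is just an illustrative choice:

num_episodes = 500  # illustrative; tune for your task
for episode in range(num_episodes):
    observation, action_space = env.reset()
    agent.reset(observation, action_space)
    done = False
    episode_return = 0.0
    while not done:
        action = agent.act(exploit=False)
        action_result = env.step(action)
        agent.observe(action_result)
        agent.learn()
        episode_return += action_result.reward
        done = action_result.done
    if (episode + 1) % 50 == 0:
        print(f"Episode {episode + 1}: return = {episode_return}")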

As per Facebook:

A more detailed tutorial will be presented at the NeurIPS 2023 EXPO presentation (12/10/2023, 4 pm to 6 pm). Users can replace the environment with any real-world problem.
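
For instance, swapping in a different Gym task is essentially a one-line change, since the agent reads the state dimension and action space from the environment itself (Acrobot-v1 below is just an illustrative choice of discrete-action task):

env = GymEnvironment("Acrobot-v1")  # any discrete-action Gym environment ID works here
num_actions = env.action_space.n
agent = PearlAgent(
    policy_learner=DeepQLearning(
        state_dim=env.observation_space.shape[0],  # now 6-dimensional for Acrobot
        action_space=env.action_space,
        hidden_dims=[64, 64],
        training_rounds=20,
        action_representation_module=OneHotActionTensorRepresentationModule(
            max_number_actions=num_actions
        ),
    ),
    replay_buffer=FIFOOffPolicyReplayBuffer(10_000),
)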

GitHub link: https://github.com/facebookresearch/Pearl
