Pearl: Parallel Evolutionary and Reinforcement Learning Library
An Open Source Research Toolbox
Reinforcement learning (RL) has achieved notable success in optimizing agents in environments with a reward structure. Examples include the impressive algorithms able to beat the best human players in games like Dota 2 and chess. More recently, evolutionary computation (EC) algorithms have also proven successful, with performance comparable to the generally more complex RL methods. Whilst many open-source RL and EC libraries exist, no publicly available library combines the two approaches for enhanced comparison, cooperation or visualization.
Pearl 🦪 is a PyTorch-based package designed for rapid prototyping of new adaptive decision-making algorithms at the intersection of RL and EC. In this article, I want to introduce the library and detail some of its key features.
- The library itself is hosted on GitHub here.
- A technical report going through further details can be found on arXiv here.
- An interactive tutorial using Google Colab can be found here.
Key Features
- ✅ RL algorithms, EC algorithms and hybrid algorithms (combining RL and EC) can all be implemented from a single base class.
- ✅ All agents are compatible with OpenAI Gym environments.
- ✅ Multi-agent support for faster training.
- ✅ TensorBoard integration for real-time analysis.
- ✅ Modular and extensible components with type hints and function docstrings.
- ✅ Opinionated module settings grouped together by dataclasses.
- ✅ Custom callbacks to inject unique logic into your algorithm.
- ✅ Unit tests with 92% code coverage.
- ✅ Flexible and powerful neural network models.
- ✅ Command-line interface for running demonstrations of implemented agents and for visualizing plots.
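To illustrate the idea of grouping module settings with dataclasses, here is a minimal sketch using Python's standard `dataclasses` module. The class and field names below are illustrative assumptions for this article, not Pearl's actual configuration classes.

```python
from dataclasses import dataclass, field


# Hypothetical settings groups in the spirit of Pearl's dataclass-based
# configuration; names and defaults are assumptions, not the library's API.
@dataclass
class ExplorerSettings:
    start_epsilon: float = 1.0
    end_epsilon: float = 0.05


@dataclass
class TrainSettings:
    learning_rate: float = 1e-3
    batch_size: int = 64
    # Nested group: exploration settings travel together with training settings.
    explorer: ExplorerSettings = field(default_factory=ExplorerSettings)


# Override one field; everything else keeps its documented default.
settings = TrainSettings(learning_rate=3e-4)
print(settings.learning_rate, settings.explorer.start_epsilon)
```

Grouping related hyperparameters this way keeps each module's options typed, documented, and discoverable in one place, rather than scattered across keyword arguments.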
An Example
Now you can directly visualize and compare results from RL and EC algorithms using the same software 🤩
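As a sketch of how such a side-by-side comparison works with the TensorBoard integration, the snippet below logs two synthetic reward curves as separate runs using PyTorch's `SummaryWriter`. The run names and curves are made up for illustration; they are not Pearl output.

```python
import math

from torch.utils.tensorboard import SummaryWriter

# Two synthetic training curves standing in for an RL agent and an EC agent.
curves = {
    "rl_agent": [10 * math.log1p(step) for step in range(100)],
    "ec_agent": [0.5 * step for step in range(100)],
}

# One log directory per run lets TensorBoard overlay the curves on one plot.
for run_name, curve in curves.items():
    writer = SummaryWriter(log_dir=f"runs/{run_name}")
    for step, reward in enumerate(curve):
        writer.add_scalar("episode_reward", reward, step)
    writer.close()

# Then launch the dashboard with: tensorboard --logdir runs
```

Because both runs log the same scalar tag, TensorBoard draws them on the same chart, which is exactly the kind of RL-versus-EC comparison the library aims to make easy.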