reinforce-js v1.5.1 · MIT · last release 6 years ago

reinforce-js


Call For Volunteers: Due to my lack of time, I'm desperately looking for voluntary help. Should you be interested in building reinforcement agents (even as a newcomer) and willing to develop this educational project a little further, please contact me :) There are some points on the agenda that I'd still like to see implemented to make this project a nice library for abstract educational purposes.

INACTIVE: due to lack of time and help, this project is currently unmaintained.

reinforce-js is a collection of simple reinforcement learning solvers. This library is for educational purposes only. It takes an object-oriented approach and aims to deliver simplified interfaces that make the algorithms easy to use (built with TypeScript). Moreover, it is an extension of Andrej Karpathy's reinforcement learning library, which implements several common RL algorithms. In particular, the library currently includes:

  • Deep Q-Learning: Q-Learning with neural-network function approximation (DQNSolver Details and related Google DeepMind Paper)
  • Dynamic Programming methods
  • (Tabular) Temporal Difference Learning (SARSA/Q-Learning)
  • Stochastic/Deterministic Policy Gradients and Actor-Critic architectures for dealing with continuous action spaces (very alpha: likely buggy, or at the very least finicky and inconsistent)

For Production Use

What does the Library offer?

Currently exposed Classes:

DQN-Solver

Code-Example and General Information

  • DQNSolver - Concrete Deep Q-Learning Solver
    • This class contains the main Deep Q-Learning algorithm from the DeepMind paper. On instantiation it needs to be configured with two configuration objects. The algorithm has only minimal knowledge of its environment; its behavior can be tuned via its hyperparameters (DQNOpt).
    • The Deep Q-Learning algorithm is designed to be fairly universal, since its reasoning depends only on an environmental perception and an environmental feedback.
    • The learning-agents implementation shows that the DQNSolver can also be set up so that its agent gains maximum autonomy by establishing its own reward scheme.
  • DQNOpt - Concrete options of DQNSolver
    • This class configures the DQNSolver: it holds all the hyperparameters for the DQNSolver. For the detailed initialization please see the General Information.
  • DQNEnv - Concrete environment of DQNSolver
    • This class describes the environment in which the DQNSolver is supposed to operate: it holds the boundary measures of that environment. For the detailed initialization please see the General Information.
  • Example Application: Learning Agents (GitHub Page)
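Putting the three classes together, a minimal setup could look like the following sketch. The constructor arguments and setter names follow the usage pattern described above but should be verified against the General Information; all concrete values here are placeholders, not recommendations.

```typescript
import { DQNSolver, DQNOpt, DQNEnv } from 'reinforce-js';

// Environment boundaries: width/height of the world plus the
// dimensionality of the state vector and the number of actions.
const env = new DQNEnv(400, 400, 20, 4);

// Hyperparameters of the solver (values here are illustrative only).
const opt = new DQNOpt();
opt.setTrainingMode(true);
opt.setNumberOfHiddenUnits([100]); // one hidden layer with 100 units
opt.setEpsilon(0.05);              // exploration rate
opt.setGamma(0.9);                 // discount factor
opt.setAlpha(0.005);               // learning rate

const solver = new DQNSolver(env, opt);

// Per time step, the solver follows a decide/learn cycle:
//   const action = solver.decide(state); // state: number[] perception vector
//   ...carry out `action` in the environment, observe a reward...
//   solver.learn(reward);
```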

TD-Solver (not tested)

Code-Example

  • TDSolver - Concrete Temporal Difference Solver
  • TDOpt - Concrete Options for TDSolver creation
  • TDEnv - Concrete Environment for TDSolver creation
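Independent of the TDSolver's concrete API, the tabular TD idea it implements can be illustrated with a small self-contained sketch. The corridor environment, constants, and helper names below are invented for illustration and are not part of reinforce-js.

```typescript
// Tabular Q-learning on a 4-state corridor: moving right from
// state 2 into terminal state 3 yields reward 1, everything else 0.
const numStates = 4;
const numActions = 2; // 0 = left, 1 = right
const gamma = 0.9;    // discount factor
const alpha = 0.5;    // learning rate

// Q-table initialised to zero.
const Q: number[][] = Array.from({ length: numStates }, () => Array(numActions).fill(0));

// Deterministic transition: clamp movement to the corridor bounds.
function step(s: number, a: number): { next: number; reward: number } {
  const next = Math.max(0, Math.min(numStates - 1, s + (a === 1 ? 1 : -1)));
  return { next, reward: next === numStates - 1 ? 1 : 0 };
}

// Deterministic sweeps over all state-action pairs stand in for
// epsilon-greedy exploration, so the result is reproducible.
for (let iter = 0; iter < 200; iter++) {
  for (let s = 0; s < numStates - 1; s++) {
    for (let a = 0; a < numActions; a++) {
      const { next, reward } = step(s, a);
      const tdTarget = reward + gamma * Math.max(...Q[next]);
      Q[s][a] += alpha * (tdTarget - Q[s][a]); // the Q-learning TD update
    }
  }
}

const greedyAction = (s: number) => (Q[s][1] > Q[s][0] ? 1 : 0);
console.log(Q, greedyAction(0)); // the learned policy prefers "right" in every state
```

Replacing the max over next-state values with the value of the action actually taken would turn this into the (on-policy) SARSA update.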

Planned to be implemented:

  • DPSolver - Concrete Dynamic Programming Solver
  • DPOpt - Concrete Options for DPSolver creation
  • SimpleReinforcementSolver - Concrete Simple Reinforcement Solver
  • SimpleReinforcementOpt - Concrete Options for SimpleReinforcementSolver creation
  • RecurrentReinforcementSolver - Concrete Recurrent Reinforcement Solver
  • RecurrentReinforcementOpt - Concrete Options for RecurrentReinforcementSolver creation
  • DeterministPGSolver - Concrete Deterministic Policy Gradient Solver
  • DeterministPGOpt - Concrete Options for DeterministPGSolver creation

How to install as a dependency:

Download available @npm: reinforce-js

Install via command line:

npm install --save reinforce-js@latest

The project directly ships with the transpiled JavaScript code. For TypeScript development it also contains map files and declaration files.

How to import?

These classes can be imported from this npm module, e.g.:

import { DQNSolver, DQNOpt, DQNEnv } from 'reinforce-js';

For JavaScript usage require classes from this npm module as follows:

const DQNSolver = require('reinforce-js').DQNSolver;
const DQNOpt = require('reinforce-js').DQNOpt;
const DQNEnv = require('reinforce-js').DQNEnv;

Example Application

For the DQN-Solver please visit Learning Agents (GitHub Page).

Community Contribution

Everybody is more than welcome to contribute and extend the functionality!

Please feel free to contribute to this project as much as you wish to.

  1. Clone from GitHub via git clone https://github.com/mvrahden/reinforce-js.git
  2. cd into the directory and run npm install for initialization
  3. Run npm run test. If everything is green, you're ready to go :sunglasses:

Before opening a pull request, please make sure that you've run all the tests via the testing command:

npm run test

This project relies on Visual Studio Code's built-in TypeScript linting facilities. We primarily follow the Google TypeScript Style Guide through the included tslint-google.json configuration file.

Dependencies

This library relies on the object-oriented Deep Recurrent Neural Network library:

Work in Progress

Please be aware that this repository is still under construction. Changes are likely to happen. There are still classes to be added, e.g. DPSolver, SimpleReinforcementSolver, RecurrentReinforcementSolver, DeterministPGSolver, and their individual Opts and Envs.

License

As per the license file: MIT
