
# RLR

Reinforcement Learning with R

The goal of this R package is to bring several known, as well as newly developed, machine learning algorithms to R.

Current development focuses on examining and analysing different learning algorithms, such as Q-Learning or A3C, in the context of finding best responses to the repeated prisoner's dilemma (see the package skranz/StratTourn). This package provides the newly developed features discussed in the dissertation of Martin Kies (to be published). The syntax may change at any time, as this repository is under ongoing development; update with caution!

There are two showcases that provide an easy start:

* `Showcase Improved Q-Learning with Gradient Boosting.R` covers the standard case in which one wants a reasonably fast estimate of the stability of a strategy using gradient boosting.
* `Showcase Improved Q-Learning with RNN-LSTM.R` is identical, except that it additionally shows which parameters have to be changed if one wants to play a tournament. Here everything is already parameterized so that an RNN with LSTM cells is trained.
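A minimal quick-start sketch follows. It assumes that the package can be installed from GitHub via devtools under the repository path MartinKies/RLR, that the package is loaded as RLR, and that the showcase scripts are sourced from the root of a local clone of the repository; these names are assumptions and should be adjusted to your setup.

```r
# Quick-start sketch (assumptions: GitHub repository path "MartinKies/RLR",
# package name "RLR", showcase scripts in the root of a local clone).
# install.packages("devtools")
devtools::install_github("MartinKies/RLR")  # repository path is an assumption
library(RLR)

# Run the gradient boosting showcase (fast stability estimate of a strategy)
source("Showcase Improved Q-Learning with Gradient Boosting.R")

# Alternatively, run the RNN/LSTM variant (parameterized for a tournament)
source("Showcase Improved Q-Learning with RNN-LSTM.R")
```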

## Defining the interface between games and learning algorithms

Every game has to provide the following function:

`Get.Game.Object.()`, which returns a list (called "game.object") that should have the following list elements:

The following elements are optional and may not be supported by all games:

* memory.self.play: a function which generates memories, based on the encoding, when strategies play against themselves.

Other list elements may be used by the game functions themselves. The following items are recommended:

Note that, structurally, a game.object should never contain information regarding any game.states.
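As an illustration of this interface, the sketch below shows one possible shape of a `Get.Game.Object.`-style function. Only memory.self.play is taken from the description above; every other element name in the sketch (name, game.pars, encoding) is a hypothetical placeholder rather than part of the documented interface.

```r
# Illustrative sketch of the game / learning-algorithm interface.
# Only "memory.self.play" comes from the documentation above; the element
# names "name", "game.pars" and "encoding" are hypothetical placeholders.
Get.Game.Object.MyGame <- function(...) {
  game.object <- list(
    name = "MyGame",            # hypothetical: identifier of the game
    game.pars = list(...),      # hypothetical: static game parameters
    encoding = "default",       # hypothetical: how game states are encoded

    # Optional element: generates memories, based on the encoding,
    # when strategies play against themselves.
    memory.self.play = function(game.object, ...) {
      # ... build and return encoded self-play memories ...
      NULL
    }
  )
  # Structurally, the game.object must never contain any game.states.
  return(game.object)
}
```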


