Extend.Memory.QLearningPersExpPath: Extend Memory by specified experiences


View source: R/QlearningPersExpPath.R

Description

Returns modified algo.var, where memory has been extended as specified.

Usage

Extend.Memory.QLearningPersExpPath(algo.var, algo.par = NULL,
  game.object, memory.type, memory.param = NULL, model.par = NULL)

Arguments

algo.var

A variable algorithm object holding the variables to be modified, as returned by Initialise.QLearningPersExpPath().

game.object

A game object as defined by Get.Game.Object.<Name>.

memory.param

Parameters necessary for the chosen memory.type.

model.par

Parameters of the model (i.e. the neural network). Currently only used for the RNN's mask value.

memory.type

Which type of extension should take place? The following types are supported:

  • self.play The other strategies play against themselves, in order to learn possible secret handshakes. If the learning strategy is itself among the other strategies, the "self" strategy is ignored. The following memory.param are expected:

    • no How often should the other strategies play against themselves?

  • solid.foundation Not only self.play, but also a random initialisation with increasing defection probabilities. The following memory.param are expected:

    • self.no How often should the other strategies play against themselves?

    • rep.no How often should a random strategy be played? The defection probability is linearly increased.

If combinations of different memory types are needed, the function can be called multiple times.
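For illustration, a hedged sketch of such a combined call sequence. It assumes that algo.var, game.object, and model.par have already been created (e.g. via Initialise.QLearningPersExpPath() and the relevant Get.Game.Object.<Name> function); the memory.param values shown are purely illustrative, not recommended settings.

```r
# Hypothetical usage sketch; object setup and parameter values are assumptions.
# First, extend memory with pure self-play of the other strategies.
algo.var <- Extend.Memory.QLearningPersExpPath(
  algo.var,
  game.object  = game.object,
  memory.type  = "self.play",
  memory.param = list(no = 10)   # play the other strategies against themselves 10 times
)

# Then add a "solid foundation": self-play plus random strategies
# whose defection probability rises linearly over rep.no repetitions.
algo.var <- Extend.Memory.QLearningPersExpPath(
  algo.var,
  game.object  = game.object,
  memory.type  = "solid.foundation",
  memory.param = list(self.no = 10, rep.no = 50),
  model.par    = model.par       # needed here only for the RNN's mask value
)
```

Because the function returns the modified algo.var, each call's result is reassigned so the memory extensions accumulate.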


MartinKies/USLR documentation built on Nov. 10, 2019, 5:24 a.m.