Man pages for MartinKies/USLR
Reinforcement Learning with R

Act.A3C - Determines which action the algorithm takes
Action.2.Choice.PD - Action to Array for Prisoners Dilemma
Action.Encoding.Info.PD - Get Info of Action Encoding
Action.Encoding.Info.Simple.Game - Get Info of Action Encoding
Act.Qlearning - Determines which action to take
Act.Qlearning.old - Determines which action to take
Act.QLearningPers - Determines which action to take
Act.QLearning.Surprise - Determines which action to take
Act.QPathing - Determines which action to take
Act.QPredictions - Determines which action to take
Advantage.function - Calculates N-Step Returns or weighted Temporal Difference...
Alphabet3 - A student strategy
Antitiktak1 - A student strategy
a.tadaaa.1 - A student strategy
Calc.Endstate.Value.QPredictions - Calculates Endstate value
Calc.Reward.QPathing - Calc.Reward.QPathing
Calc.Reward.QPredictions - Calc.Reward.QPredictions
Calc.Reward.QPredictions.expectedQ - Calculate Expected Value based on action
Calc.Reward.QPredictions.expectedReward - Calculate Expected immediate Reward based on action
Choice.2.Action.PD - Array to Action for Prisoners Dilemma
Choice.2.Action.Simple.Game - Array to Action for Simple Game
Convert.2.train - Converts stored Memory into arrays.
Define_Graph - Graph for Network Loss according to A3C.
Define_Graph_Gradient_Update - Graph to update Network weights
Encode.Game.States.PD - Transforms List of Gamestates to std encoding form
Extend.Memory.Qlearning - Extend Memory by specified experiences
Extend.Memory.Qlearning.old - Extend Memory by specified experiences
Extend.Memory.QLearningPers - Extend Memory by specified experiences
Extend.Memory.QLearning.Surprise - Extend Memory by specified experiences
Extend.Memory.QPathing - Extend Memory by specified experiences
Extend.Memory.QPredictions - Extend Memory by specified experiences
false.friend - A student strategy
fix.price.loc - Example strategy for the Hotelling game
Generate.Start.State.PD - Generates Start State for Prisoners Dilemma Game
Generate.Start.State.Simple.Game - Generates Start State for Simple Game
Get.Def.Par.A3C - Get Default Parameters of A3C.
Get.Def.Par.Neural.Network - Define default Parameters of the Neural Network Function
Get.Def.Par.Neural.Network.A3C - Get Default Parameters of the Feed-Forward Neural Network for...
Get.Def.Par.Neural.Network.A3C.LSTM - Get Default Parameters of the LSTM Neural Network for the A3C...
Get.Def.Par.QLearning - Delivers some default Parameters of Q-learning
Get.Def.Par.QLearning.old - Delivers some default Parameters of Q-learning
Get.Def.Par.QLearningPers - Delivers some default Parameters of Q-learning
Get.Def.Par.QLearning.Surprise - Delivers some default Parameters of Q-learning
Get.Def.Par.QPathing - Delivers some default Parameters of Q-learning
Get.Def.Par.QPredictions - Delivers some default Parameters of Q-Predictions
Get.Def.Par.RNN - Define default Parameters of the RNN Function
Get.Game.Object.PD - Get Game Object which fully defines Prisoners Dilemma.
Get.Game.Object.Simple.Game - Get Game Object which fully defines simple game.
Get.Game.Param.PD - Standard Parameters of Repeated Prisoners Dilemma. Returns a...
Get.Par.PD - Defines model parameters for 'Prisoners Dilemma'
Get.Par.Simple.Game - Defines model parameters for 'Simple Game'
getrich - A student strategy
Globaler.Tit.4.Tat - A student strategy
Hybrid.Predict.Action.Values.QPathing - Generates best guesses based on Experience
Hybrid.Predict.Action.Values.QPredictions - Generates best guesses based on Experience
Initialise.A3C - Set changeable A3C Parameters.
Initialise.Qlearning - Set changeable model variables
Initialise.Qlearning.old - Set changeable model variables
Initialise.QLearningPers - Set changeable model variables
Initialise.QLearning.Surprise - Set changeable model variables
Initialise.QPathing - Set changeable model variables
Initialise.QPredictions - Set changeable model variables
into.spaaaace - A grad student strategy
meineStrat2 - A student strategy
Memory.Random.Play.PD - Generate Memory where strategies play against a random...
Memory.Self.Play.PD - Generate Memory where strategies play against themselves
Model.strat.maximum.full.Ten - A strategy to be used after model has been trained
my.antistrat2 - A student strategy; answers strat2 [0.566 after 1000 Rounds]
nashtag1 - A student strategy
NN.strat.full.zero - A strategy to be used after model has been trained
NN.strat.main - The actual strategy after model has been trained
NN.strat.Slim.TenTen - A strategy to be used after model has been trained
NN.strat.Slim.TenTen.QLearning - A strategy to be used after model has been trained
NN.strat.static.end.Ten - A strategy to be used after model has been trained
nottitfortat - A student strategy
phases - A student strategy
Play.Multiple.Games.QLearningPers - Train multiple games
Play.On.Strategy.QLearningPers - Play the game based on strategy
Predict.Neural.Network - Evaluate Neural Network
Predict.Neural.Network.A3C - Predict Neural Network
Predict.RNN - Evaluate Recurrent Neural Network (RNN)
prep.data.4.shiny - Prepare Worker Memory to visualize with shiny
prof.strat - A student strategy
pudb.strat2 - A student strategy
Q.on.hist.PD.QLearning - Q-values based on history of IPD
Q.on.hist.PD.QLearning.Surprise - Q-values based on history of IPD
rainbow.unicorn.antistrat2 - A student strategy
redim.state - Change dimensionality of the state array.
Replay.Qlearning - Train model of Q learning
Replay.Qlearning.old - Train model of Q learning
Replay.QLearningPers - Train model of Q learning
Replay.QLearning.Surprise - Train model of Q learning
Replay.QPathing - Train model of Q Pathing
Replay.QPredictions - Train model of Q Predictions
schachmatt_tournament - A student strategy
screams.in.space - A grad student strategy
seda.strat2 - A student strategy
Setup.Neural.Network - Setup a Neural Network
Setup.Neural.Network.A3C - Setup a Feed-Forward Neural Network for the...
Setup.Neural.Network.A3C.LSTM - Setup a Neural Network with an LSTM-Layer for the...
Setup.QLearning - Sets up a model based on model parameters
Setup.QLearning.old - Sets up a model based on model parameters
Setup.QLearningPers - Sets up a model based on model parameters
Setup.QLearning.Surprise - Sets up a model based on model parameters
Setup.QPathing - Q-Pathing is rather similar to Q-learning but we have...
Setup.QPredictions - Q-Predictions is rather similar to Q-learning but we use a...
Setup.RNN - Setup an RNN
squishy.the.octopus - A student strategy
State.2.Array.PD - State to Array for Prisoners Dilemma
State.2.Array.Simple.Game - State to Array for Simple Game
State.Transition.PD - Get next State of Prisoners Dilemma Game
State.Transition.Simple.Game - Get next State of Simple Game
strat1 - A student strategy
strat2 - A student strategy
strat3 - A student strategy
strat4 - A student strategy
stratego - A student strategy
ta.daaa - A student strategy
TikTak1 - A student strategy
Train.A3c - Use the A3C algorithm to train a model
Train.Neural.Network - Train Neural Network
Train.On.Memory.QLearning - Trains model based on memory
Train.On.Memory.QLearningPers - Trains model based on memory
Train.On.Memory.QLearning.Surprise - Trains model based on memory
Train.QLearning - Train a model based on Q-Learning (see the usage sketch after this list)
Train.QLearning.old - Train a model based on Q-Learning
Train.QLearningPers - Train a model based on Q-Learning
Train.QLearning.Surprise - Train a model based on Q-Learning
Train.QPathing - Train a model based on Q-Learning
Train.QPredictions - Train a model based on Q-Learning
Train.RNN - Train RNN
traveling.salesman - Example strategy for the Hotelling game
Update.Evaluator.QLearningPers - Controlled Copying of Models
Update.Memory.QLearning - Add historic Q-Values to memory
Update.Memory.QLearningPers - Add historic Q-Values to memory
Update.Memory.QLearning.Surprise - Add historic Q-Values to memory
Update.Net.QPathing - Internal Function
Update.Net.QPredictions - Internal Function
viva.PD.Strategy - A student strategy
Worker.A3C - Defines an Agent based on the A3C-Algorithm
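
The functions above fall into a few recurring groups: parameter defaults (Get.Def.Par.*, Get.Par.*), game definitions (Get.Game.Object.*, State.Transition.*), setup and training routines (Setup.*, Train.*), and playable strategies (NN.strat.* and the student strategies). The R sketch below illustrates how the Q-learning entries might be combined into a training run on the Prisoners Dilemma. It is a hypothetical sketch only: the argument names and signatures are assumptions, not taken from the package, so consult the individual man pages before use.

    # Hypothetical usage sketch; argument names and signatures are assumptions.
    # devtools::install_github("MartinKies/USLR")
    library(USLR)

    # Default parameters and game definition (see the entries above)
    model.par   <- Get.Def.Par.QLearning()    # default Q-learning parameters
    game.object <- Get.Game.Object.PD()       # fully defines the Prisoners Dilemma

    # Set up a model and train it; the exact arguments may differ
    model <- Setup.QLearning(game.object, model.par)
    model <- Train.QLearning(model, game.object, model.par)

    # A trained model can then back a strategy such as NN.strat.main
    # when playing the repeated Prisoners Dilemma.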
MartinKies/USLR documentation built on July 15, 2018, 5:49 p.m.