Files in MartinKies/RLR
Reinforcement Learning with R

.gitattributes
DESCRIPTION
NAMESPACE
R/AsynchronousAdvantageActorCritic.R
R/CuriosityFunctions.R
R/DefaultFunctions.R
R/HelpfullFunctions.R
R/HotellingStratTourn.R
R/NeuralNetwork.R
R/PDstrategies.R
R/PrisonersDilemmaStratTourn.R
R/QlearningPersExpPath.R
R/RFWager.R
R/RNN.R
R/SimpleGame.R
R/ThesisFunctions.R
R/XGBoost.R
R/importPackages.R
README.md
Showcase Improved Q-Learning with Gradient Boosting.R
Showcase Improved Q-Learning with RNN-LSTM.R
Showcase Normal Q-Learning with NN.R
man/Act.A3C.Rd
man/Act.QLearningPersExpPath.Rd
man/Action.2.Choice.PD.Rd
man/Action.Encoding.Info.PD.Rd
man/Action.Encoding.Info.Simple.Game.Rd
man/Advantage.function.Rd
man/Calc.R.phi.Rd
man/Choice.2.Action.PD.Rd
man/Choice.2.Action.Simple.Game.Rd
man/Convert.2.train.Rd
man/Define_Graph.Rd
man/Define_Graph_Gradient_Update.Rd
man/Discounted.Reward.PD.Rd
man/Encode.Game.States.PD.Rd
man/Encoding.Harper.PD.Rd
man/Encoding.Manager.PD.Rd
man/Encoding.last.X.rounds.PD.Rd
man/Extend.Memory.QLearningPersExpPath.Rd
man/External.Eval.PD.Rd
man/Generate.Start.State.PD.Rd
man/Generate.Start.State.Simple.Game.Rd
man/Get.Def.Par.A3C.Rd
man/Get.Def.Par.NN.Legacy.Thesis.Basic.Rd
man/Get.Def.Par.NN.Legacy.v.0.1.6.Rd
man/Get.Def.Par.Neural.Network.A3C.LSTM.Rd
man/Get.Def.Par.Neural.Network.A3C.Rd
man/Get.Def.Par.Neural.Network.Rd
man/Get.Def.Par.QLearningPersExpPath.Legacy.ThesisOpt.RNN.Rd
man/Get.Def.Par.QLearningPersExpPath.Legacy.ThesisOpt.XGB.Rd
man/Get.Def.Par.QLearningPersExpPath.Legacy.v.0.1.6.Rd
man/Get.Def.Par.QLearningPersExpPath.QLearning.Basic.Rd
man/Get.Def.Par.QLearningPersExpPath.Rd
man/Get.Def.Par.RNN.Legacy.ThesisOpt.Rd
man/Get.Def.Par.RNN.Legacy.v.0.1.6.Rd
man/Get.Def.Par.RNN.Rd
man/Get.Def.Par.XGBoost.Legacy.ThesisOpt.Rd
man/Get.Def.Par.XGBoost.Legacy.v.0.1.6.Rd
man/Get.Def.Par.XGBoost.Rd
man/Get.Game.Object.PD.Rd
man/Get.Game.Object.Simple.Game.Rd
man/Get.Game.Param.PD.Legacy.BattleOfStrategies2013.Baseline.Rd
man/Get.Game.Param.PD.Legacy.BattleOfStrategies2019.Rd
man/Get.Game.Param.PD.Rd
man/Get.Par.PD.Rd
man/Get.Par.Simple.Game.Rd
man/Initialise.A3C.Rd
man/Initialise.QLearningPersExpPath.Rd
man/Memory.Random.Play.PD.Rd
man/Memory.Self.Play.PD.Rd
man/Model.strat.maximum.full.Ten.Rd
man/NN.strat.Slim.TenTen.QLearning.Rd
man/NN.strat.Slim.TenTen.Rd
man/NN.strat.full.zero.Rd
man/NN.strat.main.Rd
man/NN.strat.static.end.Ten.Rd
man/PID.controller.Rd
man/Play.Multiple.Games.QLearningPersExpPath.Rd
man/Play.On.Strategy.QLearningPersExpPath.Rd
man/Predict.Neural.Network.A3C.Rd
man/Predict.Neural.Network.Rd
man/Predict.RNN.Rd
man/Q.on.hist.PD.QLearning.Rd
man/Replay.QLearningPersExpPath.Rd
man/Setup.Neural.Network.A3C.LSTM.Rd
man/Setup.Neural.Network.A3C.Rd
man/Setup.Neural.Network.Rd
man/Setup.QLearningPersExpPath.Rd
man/Setup.RNN.Rd
man/State.2.Array.PD.Rd
man/State.2.Array.Simple.Game.Rd
man/State.Transition.PD.Rd
man/State.Transition.Simple.Game.Rd
man/Train.A3c.Rd
man/Train.Neural.Network.Rd
man/Train.On.Memory.QLearningPersExpPath.Rd
man/Train.QLearningPersExpPath.Rd
man/Train.RNN.Rd
man/Update.Evaluator.QLearningPersExpPath.Rd
man/Update.Memory.QLearningPersExpPath.Rd
man/Weighted.Discount.Rd
man/Worker.A3C.Rd
man/compare.exploration.Rd
man/counter.strat.a.Rd
man/counter.strat.c.Rd
man/counter.strat.d.Rd
man/counter.strat.e.Rd
man/counter.strat.f.Rd
man/counter.strat.g.Rd
man/counter.strat.h.Rd
man/counter.strat.i.Rd
man/fix.price.loc.Rd
man/get.against.itself.benchmark.Rd
man/get.antistrat.Rd
man/get.benchmark.Rd
man/get.conversion.Rd
man/net.nice.minus1.Rd
man/net.nice.start1.Rd
man/net.nice0.Rd
man/net.nice1.Rd
man/prep.data.4.shiny.Rd
man/redim.state.Rd
man/smooth.average.Rd
man/smooth.triangle.Rd
man/strat.a.Rd
man/strat.b.Rd
man/strat.c.Rd
man/strat.d.Rd
man/strat.e.Rd
man/strat.f.Rd
man/strat.g.Rd
man/strat.h.Rd
man/strat.i.Rd
MartinKies/RLR documentation built on Dec. 24, 2019, 10:02 p.m.