Man pages for jdtrat/dynaq
Tools to Simulate DynaQ Reinforcement Learning Algorithms

basicVisualization: Get Basic Visualizations for Simulations
choice: Choose an Action
generateData: Generate Task Simulations
getFirstTransition: Get Transition from First to Second State
getLogFit: Get the Logistic Regression Fit
getLogPreds: Get the Logistic Regression Predictions
getQGraphicValues: Get Q Values for Plotting
getReward: Check for Reward Outcome
getSecondTransition: Get Transition from Second to Third State
interleave: Interleave Two Dataframes
logSetup: Manipulate the Data for Logistic Regression
manipulateData: Manipulate Data
oneTrial: Perform One Model-Free Trial
processSimData: Process Simulated Data
randomChoice: Create a Random Choice
randomPrevious: Get a Random Value from an Existing Dataframe
randRewardProb: Random Reward Probability
removeTable: Remove Qtable From Data
setupQtable: Initialize Q-value Information
setupRewards: Setup Rewards
setupTransFunction: Setup Transition Model
simModel: Perform One Model-Based Trial
simTransitionPlot: Plot the Transition Model's Estimates Over Time
smaxAction: Action According to Softmax
stayDistributionPlot: Plot the Distribution of Stay Probabilities for Multiple...
stayProbPlot: Plot the Stay Probability
summaryTable: Create a Summary Table of the Stay Probabilities
updateQRL: Update QR and QL
updateQsecondState: Update Q-values at Second State
updateQtable: Update Q Table
updateQthirdState: Update Q-values at Third State
updateRewardProb: Update Reward Probabilities
updateTransFunction: Update Transition Model
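For orientation, the sketch below illustrates two of the ideas named in this index: softmax action selection (cf. smaxAction) and a one-step Q-value update (cf. updateQtable). It is a minimal, self-contained example of the general technique, not the package's implementation; the function names and parameters (softmax_choice, update_q, beta, alpha, gamma) are illustrative assumptions and do not correspond to the package's API.

```r
# Illustrative sketch only; not code from jdtrat/dynaq.

# Softmax choice: pick an action with probability proportional to exp(beta * Q).
softmax_choice <- function(q_values, beta = 1) {
  probs <- exp(beta * q_values) / sum(exp(beta * q_values))
  sample(seq_along(q_values), size = 1, prob = probs)
}

# One-step Q-learning update: move Q(state, action) toward the observed reward
# plus the discounted value of the best action in the next state.
update_q <- function(q_table, state, action, reward, next_state,
                     alpha = 0.1, gamma = 0.9) {
  target <- reward + gamma * max(q_table[next_state, ])
  q_table[state, action] <- q_table[state, action] +
    alpha * (target - q_table[state, action])
  q_table
}

# Example with a two-state, two-action Q-table.
q_table <- matrix(0, nrow = 2, ncol = 2)
a <- softmax_choice(q_table[1, ], beta = 3)
q_table <- update_q(q_table, state = 1, action = a, reward = 1, next_state = 2)
```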
jdtrat/dynaq documentation built on July 24, 2020, 7:18 a.m.