WARNING: This package is no longer maintained!
# Install from CRAN.
install.packages("reinforcelearn")
# Install development version from github.
devtools::install_github("markusdumke/reinforcelearn")
Reinforcement learning with the package reinforcelearn is as easy as:
library(reinforcelearn)
env = makeEnvironment("windy.gridworld")
agent = makeAgent("softmax", "table", "qlearning")
# Run interaction for 10 episodes.
interact(env, agent, n.episodes = 10L)
#> $returns
#> [1] -3244 -2335 -1734 -169 -879 -798 -216 -176 -699 -232
#>
#> $steps
#> [1] 3244 2335 1734 169 879 798 216 176 699 232
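The returned list reports the total reward (return) and the number of steps per episode; since the windy gridworld gives a reward of -1 per step, returns rise (and step counts fall) as the agent learns. You could visualize this with plain base R, for example:
# Plot the number of steps per episode (base R, not part of the package API).
res = interact(env, agent, n.episodes = 10L)
plot(res$steps, type = "b", xlab = "Episode", ylab = "Steps per episode")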
With makeEnvironment you can create reinforcement learning environments.
# Create a custom environment by supplying step and reset functions.
# step takes the environment (self) and an action and must return a list
# of the next state, the reward and a done flag.
step = function(self, action) {
  state = list(mean = action + rnorm(1), sd = runif(1))
  reward = rnorm(1, state[[1]], state[[2]])
  done = FALSE
  list(state, reward, done)
}
# reset must return the initial state of a new episode.
reset = function(self) {
  state = list(mean = 0, sd = 1)
  state
}
env = makeEnvironment("custom", step = step, reset = reset)
The environment is an R6 class with a set of attributes and methods. You can interact with the environment via the reset and step methods.
# Reset environment.
env$reset()
#> $mean
#> [1] 0
#>
#> $sd
#> [1] 1
# Take action.
env$step(100)
#> $state
#> $state$mean
#> [1] 99.56104
#>
#> $state$sd
#> [1] 0.5495179
#>
#>
#> $reward
#> [1] 99.40968
#>
#> $done
#> [1] FALSE
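The environment object also stores the most recent transition in its attributes. A minimal sketch, assuming the field names state, reward and done from the Environment R6 class:
# Inspect the environment's attributes after a step
# (field names assumed from the Environment R6 class).
env$state   # current state
env$reward  # reward of the last step
env$done    # whether the episode has terminated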
There are some predefined environment classes, e.g. MDPEnvironment, which allows you to create a Markov Decision Process by passing a state transition array and a reward matrix, or GymEnvironment, where you can use toy problems from OpenAI Gym.
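For example, a small MDP with two states and two actions might be set up like this (a sketch; the transitions array is assumed to have dimensions [n.states, n.states, n.actions] and rewards the dimensions [n.states, n.actions]):
# Sketch of an MDP environment with 2 states and 2 actions.
# transitions[i, j, a]: probability of moving from state i to state j under action a.
P = array(0, c(2, 2, 2))
P[, , 1] = matrix(c(0.5, 0.5,
                    0.0, 1.0), 2, 2, byrow = TRUE)
P[, , 2] = matrix(c(0.2, 0.8,
                    0.1, 0.9), 2, 2, byrow = TRUE)
# rewards[i, a]: expected reward in state i when taking action a.
R = matrix(c(5, 10,
             -1, 2), 2, 2, byrow = TRUE)
env = makeEnvironment("mdp", transitions = P, rewards = R)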
# Create a gym environment.
# Make sure you have Python, gym and reticulate installed.
env = makeEnvironment("gym", gym.name = "MountainCar-v0")
# Take random actions for 200 steps.
env$reset()
for (i in 1:200) {
action = sample(0:2, 1)
env$step(action)
env$visualize()
}
env$close()
This should open a window showing a graphical visualization of the environment during interaction.
For more details on how to create an environment, have a look at the vignette: Environments.
With makeAgent you can set up a reinforcement learning agent to solve the environment, i.e. to find the best action in each time step.
The first step is to set up the policy, which defines which action to choose. For example, we could use a uniform random policy.
# Create the environment.
env = makeEnvironment("windy.gridworld")
# Create agent with uniform random policy.
policy = makePolicy("random")
agent = makeAgent(policy)
# Run interaction for 10 steps.
interact(env, agent, n.steps = 10L)
#> $returns
#> numeric(0)
#>
#> $steps
#> integer(0)
In this scenario the agent chooses all actions with equal probability and will not learn anything from the interaction. (Returns are only reported for completed episodes; since no episode finished within these 10 steps, both result vectors are empty.) Usually we want the agent to be able to learn something. Value-based algorithms learn a value function from interaction with the environment and adjust the policy according to the value function. For example, we could set up Q-Learning with a softmax policy.
# Create the environment.
env = makeEnvironment("windy.gridworld")
# Create qlearning agent with softmax policy and tabular value function.
policy = makePolicy("softmax")
values = makeValueFunction("table", n.states = env$n.states, n.actions = env$n.actions)
algorithm = makeAlgorithm("qlearning")
agent = makeAgent(policy, values, algorithm)
# Run interaction for 10 episodes.
interact(env, agent, n.episodes = 10L)
#> $returns
#> [1] -1524 -3496 -621 -374 -173 -1424 -1742 -468 -184 -39
#>
#> $steps
#> [1] 1524 3496 621 374 173 1424 1742 468 184 39
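The components are interchangeable. As a sketch, assuming the built-in epsilon.greedy policy class and its epsilon argument, the same agent could explore epsilon-greedily instead of with softmax:
# Same Q-learning agent, but with an epsilon-greedy policy
# (assumes the "epsilon.greedy" policy class and its epsilon argument).
policy = makePolicy("epsilon.greedy", epsilon = 0.1)
agent = makeAgent(policy, values, algorithm)
interact(env, agent, n.episodes = 10L)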
Also have a look at the vignettes for further examples.
The logo is a modification of https://www.r-project.org/logo/.