MDP2POMDP    R Documentation
Description:

Convert an MDP into a POMDP by adding an observation model, or a POMDP
into an MDP by making the states observable.
Usage:

make_partially_observable(x, observations = NULL, observation_prob = NULL)

make_fully_observable(x)
Arguments:

x                  an MDP or a POMDP object.

observations       a character vector specifying the names of the
                   available observations.

observation_prob   specifies the observation probabilities (see POMDP
                   for details).
Details:

make_partially_observable() adds an observation model to an MDP. If no
observations and observation probabilities are provided, then one
observation per state is created with identity observation matrices.
The result is a fully observable model encoded as a POMDP.

make_fully_observable() removes the observation model from a POMDP and
returns an MDP.
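A custom observation model can also be supplied. The following sketch is
illustrative rather than taken from the package documentation; it assumes
that observation_prob accepts the same specifications as POMDP(),
including the "uniform" keyword, and uses the built-in Maze MDP:

library("pomdp")
data("Maze")

# one observation per state, but drawn uniformly at random, so the
# observations carry no information about the underlying state
Maze_noisy <- make_partially_observable(Maze,
  observations = Maze$states,
  observation_prob = "uniform")
Maze_noisy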
Value:

An MDP or a POMDP object.
Author(s):

Michael Hahsler
See Also:

Other MDP: MDP(), MDP_policy_functions, accessors, actions(),
add_policy(), gridworld, reachable_and_absorbing, regret(),
simulate_MDP(), solve_MDP(), transition_graph(), value_function()
Other POMDP: POMDP(), accessors, actions(), add_policy(),
plot_belief_space(), projection(), reachable_and_absorbing, regret(),
sample_belief_space(), simulate_POMDP(), solve_POMDP(), solve_SARSOP(),
transition_graph(), update_belief(), value_function(), write_POMDP()
Examples:

library("pomdp")

# Turn the Maze MDP into a partially observable problem.
# Here each state has an observation, so it is still a fully observable problem
# encoded as a POMDP.
data("Maze")
Maze
Maze_POMDP <- make_partially_observable(Maze)
Maze_POMDP
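# Inspect the observation model that was added (a sketch using the
# observation_matrix() accessor from this package; since no observations
# were supplied, these should be identity matrices).
observation_matrix(Maze_POMDP)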
sol <- solve_POMDP(Maze_POMDP)
policy(sol)
simulate_POMDP(sol, n = 1, horizon = 100, return_trajectories = TRUE)$trajectories
# Make the Tiger POMDP fully observable
data("Tiger")
Tiger
Tiger_MDP <- make_fully_observable(Tiger)
Tiger_MDP
sol <- solve_MDP(Tiger_MDP)
policy(sol)
# The result is not exciting since we can observe where the tiger is!