View source: R/sample_belief_space.R
sample_belief_space
Sample points from the belief space using one of several sampling strategies.
Usage:

sample_belief_space(model, projection = NULL, n = 1000, method = "random", ...)
Arguments:

model: an unsolved or solved POMDP.

projection: sample in a projected belief space. See projection().

n: size of the sample. For the trajectory method, it is the number of trajectories.

method: character string specifying the sampling strategy. Available are "random", "regular", and "trajectories".

...: for the trajectory method, further arguments are passed on to simulate_POMDP().
Details:

The purpose of sampling from the belief space is to provide good coverage or to sample belief points that are more likely to be encountered (see the trajectory method). The following sampling methods are available:
"random"

samples uniformly from the projected belief space using the method described by Luc Devroye (1986). Sampling is done in parallel after a foreach backend is registered (a minimal sketch of this kind of simplex sampling follows this list).

"regular"

samples points on a regularly spaced grid. This method is only available for projections on 2 or 3 states.

"trajectories"

returns the belief states encountered in n trajectories of length horizon starting at the model's initial belief. It therefore returns n x horizon belief states and will contain duplicates. Projection is not supported for trajectories. Additional arguments can include the simulation horizon and the start belief, which are passed on to simulate_POMDP().
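For intuition, here is a minimal sketch (plain R, not part of the package) of the kind of uniform simplex sampling the "random" strategy is based on: normalizing i.i.d. Exp(1) draws yields points uniformly distributed on the probability simplex, the standard construction described in Devroye (1986). The helper name sample_simplex is made up for this illustration; the package's internal implementation may differ.

# illustrative only: uniform samples from the (k-1)-simplex via normalized Exp(1) draws
sample_simplex <- function(n, k) {
  e <- matrix(rexp(n * k), nrow = n, ncol = k)  # i.i.d. exponential variates
  e / rowSums(e)                                # each row now sums to 1
}
sample_simplex(5, 2)  # roughly comparable to sample_belief_space(Tiger, n = 5)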
Value:

Returns a matrix. Each row is a sample from the belief space.
Author(s): Michael Hahsler
References: Luc Devroye (1986). Non-Uniform Random Variate Generation. Springer Verlag.
See Also:

Other POMDP: MDP2POMDP, POMDP(), accessors, actions(), add_policy(), plot_belief_space(), projection(), reachable_and_absorbing, regret(), simulate_POMDP(), solve_POMDP(), solve_SARSOP(), transition_graph(), update_belief(), value_function(), write_POMDP()
data("Tiger")
# random sampling can be done in parallel after registering a backend.
# doParallel::registerDoParallel()
sample_belief_space(Tiger, n = 5)
sample_belief_space(Tiger, n = 5, method = "regular")
sample_belief_space(Tiger, n = 1, horizon = 5, method = "trajectories")
# sample, determine the optimal action and calculate the expected reward for a solved POMDP
# Note: check.names = FALSE is used to preserve the `-` for the state names in the dataframe.
sol <- solve_POMDP(Tiger)
samp <- sample_belief_space(sol, n = 5, method = "regular")
data.frame(samp, action = optimal_action(sol, belief = samp),
reward = reward(sol, belief = samp), check.names = FALSE)
# sample from a 3 state problem
data(Three_doors)
Three_doors
sample_belief_space(Three_doors, n = 5)
sample_belief_space(Three_doors, n = 5, projection = c(`tiger-left` = .1))
if ("Ternary" %in% installed.packages()) {
sample_belief_space(Three_doors, n = 9, method = "regular")
sample_belief_space(Three_doors, n = 9, method = "regular", projection = c(`tiger-left` = .1))
}
sample_belief_space(Three_doors, n = 1, horizon = 5, method = "trajectories")
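As a quick illustrative check of the Value and the trajectory method described above (not part of the package examples): every returned row should be a proper belief, i.e., sum to 1, and the trajectory method should return n x horizon rows.

b <- sample_belief_space(Tiger, n = 2, horizon = 3, method = "trajectories")
nrow(b)                          # n * horizon = 6 belief states (duplicates possible)
all(abs(rowSums(b) - 1) < 1e-8)  # every row sums to 1, i.e., is a valid belief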