View source: R/update_belief.R
update_belief {pomdp}    R Documentation
Update the belief state given a taken action and the received observation.
update_belief(
  model,
  belief = NULL,
  action = NULL,
  observation = NULL,
  episode = 1,
  digits = 7,
  drop = TRUE
)
model: a POMDP object.

belief: the current belief state. Defaults to the start belief state specified in the model or "uniform".

action: the taken action. Can also be a vector of multiple actions or, if missing, all actions are evaluated.

observation: the received observation. Can also be a vector of multiple observations or, if missing, all observations are evaluated.

episode: use the transition and observation matrices for the given episode for time-dependent POMDPs (see POMDP).

digits: the number of digits the updated belief is rounded to.

drop: logical; drop the result to a vector if only a single belief state is returned.
Update the belief state b (belief) with the taken action a and the received observation o using the update

b' \leftarrow \tau(b, a, o)

defined so that

b'(s') = \eta \, O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s),

where \eta = 1 / \sum_{s' \in S} \left[ O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s) \right] normalizes the new belief state so that its probabilities add up to one.
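To make the update concrete, here is a minimal R sketch that applies the formula directly to hand-coded matrices for a two-state, single-action toy problem. The matrices T_a and O_a and the state/observation labels are illustrative assumptions, not part of any package data:

# Toy belief update b'(s') = eta * O(o | s', a) * sum_s T(s' | s, a) * b(s).
# T_a and O_a are assumed example matrices, not taken from a shipped model.
T_a <- matrix(c(1, 0,
                0, 1),
              nrow = 2, byrow = TRUE,
              dimnames = list(c("s1", "s2"), c("s1", "s2")))  # T(s' | s, a); rows are s
O_a <- matrix(c(.85, .15,
                .15, .85),
              nrow = 2, byrow = TRUE,
              dimnames = list(c("s1", "s2"), c("o1", "o2")))  # O(o | s', a); rows are s'
b <- c(s1 = .5, s2 = .5)       # current belief over states
o <- "o1"                      # received observation
b_pred <- drop(t(T_a) %*% b)   # sum_s T(s' | s, a) b(s)
b_new  <- O_a[, o] * b_pred    # weight by the observation probability O(o | s', a)
b_new / sum(b_new)             # eta normalizes so the probabilities add up to one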
Returns the updated belief state as a named vector. If action or observation is a vector with multiple elements or missing, a matrix with all resulting belief states is returned.
Michael Hahsler
Other POMDP:
MDP2POMDP,
POMDP(),
accessors,
actions(),
add_policy(),
plot_belief_space(),
projection(),
reachable_and_absorbing,
regret(),
sample_belief_space(),
simulate_POMDP(),
solve_POMDP(),
solve_SARSOP(),
transition_graph(),
value_function(),
write_POMDP()
library("pomdp")
data(Tiger)
# no action/observation given: all action/observation combinations are evaluated
update_belief(c(.5, .5), model = Tiger)
# update the uniform belief after listening and hearing the tiger on the left
update_belief(c(.5, .5), action = "listen", observation = "tiger-left", model = Tiger)
# update a more certain belief after another observation
update_belief(c(.15, .85), action = "listen", observation = "tiger-right", model = Tiger)
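The documented vector behavior can also be used to compute several updates at once; the call below relies on that behavior and on the Tiger model's action and observation labels:

# multiple observations at once: returns a matrix with all resulting belief states
update_belief(c(.5, .5), action = "listen",
              observation = c("tiger-left", "tiger-right"), model = Tiger)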