View source: R/distributions.R
tfd_linear_gaussian_state_space_model
The state space model, sometimes called a Kalman filter, posits a latent state vector z_t of dimension latent_size that evolves over time following linear Gaussian transitions,

z_{t+1} = F * z_t + N(b; Q)

for transition matrix F, bias b, and covariance matrix Q. At each timestep, we observe a noisy projection of the latent state,

x_t = H * z_t + N(c; R).

The transition and observation models may be fixed or may vary between timesteps.
Usage

tfd_linear_gaussian_state_space_model(
  num_timesteps,
  transition_matrix,
  transition_noise,
  observation_matrix,
  observation_noise,
  initial_state_prior,
  initial_step = 0L,
  validate_args = FALSE,
  allow_nan_stats = TRUE,
  name = "LinearGaussianStateSpaceModel"
)
Arguments

num_timesteps: Integer giving the total number of timesteps.

transition_matrix: A transition operator, represented by a Tensor or LinearOperator of shape [latent_size, latent_size], or by a callable taking a scalar integer Tensor t and returning a timestep-specific Tensor or LinearOperator.

transition_noise: An instance of a multivariate normal distribution (e.g. tfd_multivariate_normal_diag()) with event shape [latent_size], representing the mean and covariance of the transition noise model; may also be a callable taking a scalar integer Tensor t and returning a timestep-specific noise model.

observation_matrix: An observation operator, represented by a Tensor or LinearOperator of shape [observation_size, latent_size], or by a callable taking a scalar integer Tensor t and returning a timestep-specific Tensor or LinearOperator.

observation_noise: An instance of a multivariate normal distribution with event shape [observation_size], representing the mean and covariance of the observation noise model; may also be a callable taking a scalar integer Tensor t and returning a timestep-specific noise model.

initial_state_prior: An instance of a multivariate normal distribution with event shape [latent_size], representing the prior distribution on latent states.

initial_step: Optional integer specifying the index of the first modeled timestep (passed to any timestep-specific callables). Default: 0L.

validate_args: Logical, default FALSE. When TRUE, distribution parameters are checked for validity despite possibly degrading runtime performance. When FALSE, invalid inputs may silently render incorrect outputs.

allow_nan_stats: Logical, default TRUE. When TRUE, statistics (e.g., mean, mode, variance) use the value NaN to indicate the result is undefined. When FALSE, an exception is raised if one or more of the statistic's batch members are undefined.

name: Name prefixed to Ops created by this class.
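The following is a minimal sketch of how these arguments fit together, assuming the tensorflow and tfprobability packages are attached; the dimensions and noise scales are illustrative only, not values from this page. It builds a two-dimensional latent random walk (F = H = identity, b = c = 0) observed with isotropic Gaussian noise.

library(tensorflow)
library(tfprobability)

# Illustrative values: latent_size = observation_size = 2.
model <- tfd_linear_gaussian_state_space_model(
  num_timesteps = 30L,
  transition_matrix = tf$linalg$LinearOperatorIdentity(2L),
  transition_noise = tfd_multivariate_normal_diag(scale_diag = c(0.1, 0.1)),
  observation_matrix = tf$linalg$LinearOperatorIdentity(2L),
  observation_noise = tfd_multivariate_normal_diag(scale_diag = c(0.5, 0.5)),
  initial_state_prior = tfd_multivariate_normal_diag(scale_diag = c(1, 1))
)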
Details

This Distribution represents the marginal distribution on observations, p(x). The marginal log_prob is computed by Kalman filtering, and sample by an efficient forward recursion. Both operations require time linear in T, the total number of timesteps.
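Continuing the illustrative model constructed under Arguments, both operations are ordinary distribution methods:

# One sampled trajectory of observations, shape [30, 2] = [num_timesteps, observation_size].
x <- tfd_sample(model)

# Marginal log-likelihood of that trajectory, computed by Kalman filtering.
lp <- tfd_log_prob(model, x)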
Shapes

The event shape is [num_timesteps, observation_size], where observation_size is the dimension of each observation x_t. The observation and transition models must return consistent shapes.
This implementation supports vectorized computation over a batch of
models. All of the parameters (prior distribution, transition and
observation operators and noise models) must have a consistent
batch shape.
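For the illustrative model sketched under Arguments, these shapes can be inspected directly:

model$event_shape  # [30, 2], i.e. [num_timesteps, observation_size]
model$batch_shape  # [] here, since all parameters are unbatched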
Time-varying processes

Any of the model-defining parameters (prior distribution, transition and observation operators and noise models) may be specified as a callable taking an integer timestep t and returning a time-dependent value. The dimensionality (latent_size and observation_size) must be the same at all timesteps.

Importantly, the timestep is passed as a Tensor, not a Python integer, so any conditional behavior must occur inside the TensorFlow graph. For example, suppose we want to use a different transition model on even days than odd days. It does not work to write
transition_matrix <- function(t) {
  if (t %% 2 == 0) even_day_matrix else odd_day_matrix
}
since the value of t is not fixed at graph-construction time. Instead we need to write
transition_matrix <- function(t) {
  tf$cond(
    tf$equal(tf$math$mod(t, 2L), 0L),
    function() even_day_matrix,
    function() odd_day_matrix
  )
}
so that TensorFlow can switch between operators appropriately at runtime.
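As a self-contained sketch (all matrices and noise scales illustrative), the callable can then be passed wherever a fixed operator would go. Wrapping the result of tf$cond() in a LinearOperatorFullMatrix lets each branch supply a plain matrix:

library(tensorflow)
library(tfprobability)

# Illustrative 2 x 2 transition matrices for even and odd timesteps.
even_day_matrix <- tf$constant(diag(2), dtype = tf$float32)
odd_day_matrix  <- tf$constant(0.9 * diag(2), dtype = tf$float32)

transition_matrix <- function(t) {
  tf$linalg$LinearOperatorFullMatrix(
    tf$cond(
      tf$equal(tf$math$mod(t, 2L), 0L),
      function() even_day_matrix,
      function() odd_day_matrix
    )
  )
}

tv_model <- tfd_linear_gaussian_state_space_model(
  num_timesteps = 14L,
  transition_matrix = transition_matrix,
  transition_noise = tfd_multivariate_normal_diag(scale_diag = c(0.1, 0.1)),
  observation_matrix = tf$linalg$LinearOperatorIdentity(2L),
  observation_noise = tfd_multivariate_normal_diag(scale_diag = c(0.5, 0.5)),
  initial_state_prior = tfd_multivariate_normal_diag(scale_diag = c(1, 1))
)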
Value

A distribution instance.
See also

For usage examples see e.g. tfd_sample(), tfd_log_prob(), tfd_mean().

Other distributions: tfd_autoregressive(), tfd_batch_reshape(), tfd_bates(), tfd_bernoulli(), tfd_beta_binomial(), tfd_beta(), tfd_binomial(), tfd_categorical(), tfd_cauchy(), tfd_chi2(), tfd_chi(), tfd_cholesky_lkj(), tfd_continuous_bernoulli(), tfd_deterministic(), tfd_dirichlet_multinomial(), tfd_dirichlet(), tfd_empirical(), tfd_exp_gamma(), tfd_exp_inverse_gamma(), tfd_exponential(), tfd_gamma_gamma(), tfd_gamma(), tfd_gaussian_process_regression_model(), tfd_gaussian_process(), tfd_generalized_normal(), tfd_geometric(), tfd_gumbel(), tfd_half_cauchy(), tfd_half_normal(), tfd_hidden_markov_model(), tfd_horseshoe(), tfd_independent(), tfd_inverse_gamma(), tfd_inverse_gaussian(), tfd_johnson_s_u(), tfd_joint_distribution_named_auto_batched(), tfd_joint_distribution_named(), tfd_joint_distribution_sequential_auto_batched(), tfd_joint_distribution_sequential(), tfd_kumaraswamy(), tfd_laplace(), tfd_lkj(), tfd_log_logistic(), tfd_log_normal(), tfd_logistic(), tfd_mixture_same_family(), tfd_mixture(), tfd_multinomial(), tfd_multivariate_normal_diag_plus_low_rank(), tfd_multivariate_normal_diag(), tfd_multivariate_normal_full_covariance(), tfd_multivariate_normal_linear_operator(), tfd_multivariate_normal_tri_l(), tfd_multivariate_student_t_linear_operator(), tfd_negative_binomial(), tfd_normal(), tfd_one_hot_categorical(), tfd_pareto(), tfd_pixel_cnn(), tfd_poisson_log_normal_quadrature_compound(), tfd_poisson(), tfd_power_spherical(), tfd_probit_bernoulli(), tfd_quantized(), tfd_relaxed_bernoulli(), tfd_relaxed_one_hot_categorical(), tfd_sample_distribution(), tfd_sinh_arcsinh(), tfd_skellam(), tfd_spherical_uniform(), tfd_student_t_process(), tfd_student_t(), tfd_transformed_distribution(), tfd_triangular(), tfd_truncated_cauchy(), tfd_truncated_normal(), tfd_uniform(), tfd_variational_gaussian_process(), tfd_vector_diffeomixture(), tfd_vector_exponential_diag(), tfd_vector_exponential_linear_operator(), tfd_vector_laplace_diag(), tfd_vector_laplace_linear_operator(), tfd_vector_sinh_arcsinh_diag(), tfd_von_mises_fisher(), tfd_von_mises(), tfd_weibull(), tfd_wishart_linear_operator(), tfd_wishart_tri_l(), tfd_wishart(), tfd_zipf()