View source: R/distribution-methods.R
tfd_kl_divergence (R Documentation): Computes the Kullback-Leibler divergence.
Denote this distribution by p and the other distribution by q. Assuming p and q are absolutely continuous with respect to a reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X) / q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
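For instance, for two univariate normals the KL divergence has a well-known closed form, which can be used to sanity-check the method. A minimal sketch, assuming tfprobability is attached (the kl_normal helper is purely illustrative):

library(tfprobability)

# Closed form: KL[N(mu1, s1) || N(mu2, s2)] =
#   log(s2 / s1) + (s1^2 + (mu1 - mu2)^2) / (2 * s2^2) - 1/2
kl_normal <- function(mu1, s1, mu2, s2) {
  log(s2 / s1) + (s1^2 + (mu1 - mu2)^2) / (2 * s2^2) - 0.5
}

p <- tfd_normal(loc = 1, scale = 1)
q <- tfd_normal(loc = 1.5, scale = 1)
p %>% tfd_kl_divergence(q)  # matches kl_normal(1, 1, 1.5, 1) = 0.125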
tfd_kl_divergence(distribution, other, name = "kl_divergence")
distribution: The distribution being used (p above).

other: The other distribution (q above).

name: String prepended to names of ops created by this function.
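The divergence is computed analytically: a KL method must be registered in TensorFlow Probability for the pair of distribution classes involved, and unregistered pairs raise a NotImplementedError rather than falling back to a sampling approximation. A brief sketch (the unregistered pairing below is an assumption chosen for illustration):

# Normal vs. Normal has a registered analytic KL:
p <- tfd_normal(loc = 0, scale = 1)
q <- tfd_normal(loc = 1, scale = 2)
p %>% tfd_kl_divergence(q)

# Unregistered pairings error instead of being approximated,
# e.g. (assumption) Normal vs. Student t:
# p %>% tfd_kl_divergence(tfd_student_t(df = 3, loc = 0, scale = 1))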
A self$dtype Tensor with shape [B1, ..., Bn], representing n different calculations of the Kullback-Leibler divergence.
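The result follows the distributions' batch shape: each batch member contributes one divergence. A minimal sketch, assuming tfprobability is attached:

# Batch shape [3] in, shape [3] tensor of divergences out:
b1 <- tfd_normal(loc = c(0, 1, 2), scale = c(1, 1, 1))
b2 <- tfd_normal(loc = c(0, 0, 0), scale = c(2, 2, 2))
b1 %>% tfd_kl_divergence(b2)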
Other distribution_methods: tfd_cdf(), tfd_covariance(), tfd_cross_entropy(), tfd_entropy(), tfd_log_cdf(), tfd_log_prob(), tfd_log_survival_function(), tfd_mean(), tfd_mode(), tfd_prob(), tfd_quantile(), tfd_sample(), tfd_stddev(), tfd_survival_function(), tfd_variance()
d1 <- tfd_normal(loc = c(1, 2), scale = c(1, 0.5))
d2 <- tfd_normal(loc = c(1.5, 2), scale = c(1, 0.5))
d1 %>% tfd_kl_divergence(d2)