tfd_kl_divergence: Computes the Kullback-Leibler divergence.

View source: R/distribution-methods.R


Computes the Kullback–Leibler divergence.

Description

Denote this distribution by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
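
As a quick numerical check of the identity KL[p, q] = H[p, q] - H[p], the sketch below (a minimal illustration, assuming tfprobability with a working TensorFlow Probability backend) compares tfd_kl_divergence() against tfd_cross_entropy() minus tfd_entropy() for two univariate normal distributions:

  library(tfprobability)

  p <- tfd_normal(loc = 0, scale = 1)
  q <- tfd_normal(loc = 1, scale = 2)

  kl      <- p %>% tfd_kl_divergence(q)  # KL[p, q]
  cross_h <- p %>% tfd_cross_entropy(q)  # H[p, q] = -E_p[log q(X)]
  h       <- p %>% tfd_entropy()         # H[p]   = -E_p[log p(X)]

  # The two results should agree up to floating-point error.
  kl
  cross_h - h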

Usage

tfd_kl_divergence(distribution, other, name = "kl_divergence")

Arguments

distribution

The distribution being used.

other

A tfp$distributions$Distribution instance.

name

String prepended to names of ops created by this function.

Value

A Tensor of dtype self$dtype and shape [B1, ..., Bn], representing n different calculations of the Kullback-Leibler divergence.

See Also

Other distribution_methods: tfd_cdf(), tfd_covariance(), tfd_cross_entropy(), tfd_entropy(), tfd_log_cdf(), tfd_log_prob(), tfd_log_survival_function(), tfd_mean(), tfd_mode(), tfd_prob(), tfd_quantile(), tfd_sample(), tfd_stddev(), tfd_survival_function(), tfd_variance()

Examples


  library(tfprobability)

  d1 <- tfd_normal(loc = c(1, 2), scale = c(1, 0.5))
  d2 <- tfd_normal(loc = c(1.5, 2), scale = c(1, 0.5))
  d1 %>% tfd_kl_divergence(d2)
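
For reference, the KL divergence between two univariate normals has the closed form KL[N(m1, s1^2) || N(m2, s2^2)] = log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 s2^2) - 1/2. The short base-R check below (an illustration only, using the same parameters as d1 and d2 above) evaluates this formula element-wise over the batch:

  # Closed-form KL between univariate normals, evaluated element-wise
  # over the batch parameters used in the example above (no TensorFlow needed).
  m1 <- c(1, 2);   s1 <- c(1, 0.5)
  m2 <- c(1.5, 2); s2 <- c(1, 0.5)
  log(s2 / s1) + (s1^2 + (m1 - m2)^2) / (2 * s2^2) - 0.5
  # Expected: 0.125 0.000, matching the tfd_kl_divergence() result above.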

