Description:

The KL divergence of mod2 from mod1 is the cost (in bits) of encoding data drawn from the distribution of the true model (mod1) using a code that is optimized for another model's distribution (mod2). That is, it measures how much it hurts to think the data are coming from mod2 when they are actually generated from mod1.
Usage:

KL_mods(mod1, mod2)

Arguments:

mod1: the true model (a list with fields mu and Sigma)
mod2: the other (candidate) model, in the same format
Value:

The KL divergence of mod2 (the candidate) from mod1 (the true model), in bits.
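For reference, here is a minimal sketch of the closed-form computation a function like this would perform, assuming mod1 and mod2 are multivariate Gaussians stored as lists with fields mu (mean vector) and Sigma (covariance matrix), as the Arguments suggest. The name KL_mods_sketch and its body are illustrative, not the package's source; the formula is the standard Gaussian KL divergence, converted from nats to bits to match the documented units.

KL_mods_sketch <- function(mod1, mod2) {
  # Illustrative reimplementation, not the package's actual code.
  k <- length(mod1$mu)                # dimension of the Gaussians
  S2inv <- solve(mod2$Sigma)          # inverse of the candidate covariance
  d <- mod2$mu - mod1$mu              # difference of the mean vectors
  # Closed-form KL( N(mu1, Sigma1) || N(mu2, Sigma2) ) in nats:
  #   0.5 * ( tr(Sigma2^-1 Sigma1) + d' Sigma2^-1 d - k
  #           + log det Sigma2 - log det Sigma1 )
  nats <- 0.5 * (sum(diag(S2inv %*% mod1$Sigma)) +
                 drop(t(d) %*% S2inv %*% d) - k +
                 as.numeric(determinant(mod2$Sigma)$modulus) -
                 as.numeric(determinant(mod1$Sigma)$modulus))
  nats / log(2)                       # divide by ln 2 to express bits
}

# Example: identical models cost 0 bits; shifting the candidate mean
# by one unit under unit covariance costs 0.5 nats = ~0.721 bits.
mod1 <- list(mu = c(0, 0), Sigma = diag(2))
mod2 <- list(mu = c(1, 0), Sigma = diag(2))
KL_mods_sketch(mod1, mod2)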