KL_mods: Compute KL divergence of one model from a second


View source: R/kl.R

Description

The KL divergence of mod2 from mod1 is the cost (in bits) of encoding data drawn from the distribution of the true model (mod1) using a code that is optimized for another model's distribution (mod2). That is, it measures how much it hurts to assume the data come from mod2 when they are actually generated by mod1.
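Since both models are specified by a mean mu and covariance Sigma, the divergence has a closed form for Gaussians. The following is a minimal sketch of that closed-form computation in bits, assuming mod1 and mod2 are lists with fields mu (mean vector) and Sigma (covariance matrix); the actual implementation in R/kl.R may differ in details.

    # Closed-form KL divergence of mod2 from mod1 for multivariate Gaussians,
    # returned in bits (nats divided by log(2)). Illustrative sketch only.
    kl_gaussian_bits <- function(mod1, mod2) {
      d <- length(mod1$mu)
      Sigma2_inv <- solve(mod2$Sigma)
      diff <- mod2$mu - mod1$mu
      nats <- 0.5 * (sum(diag(Sigma2_inv %*% mod1$Sigma)) +     # trace term
                     t(diff) %*% Sigma2_inv %*% diff - d +      # Mahalanobis term
                     determinant(mod2$Sigma, logarithm = TRUE)$modulus -
                     determinant(mod1$Sigma, logarithm = TRUE)$modulus)
      as.numeric(nats) / log(2)                                 # nats -> bits
    }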

Usage

KL_mods(mod1, mod2)

Arguments

mod1

the true model: a list with fields mu (mean vector) and Sigma (covariance matrix)

mod2

the candidate model, in the same format as mod1

Value

KL divergence of mod2 (candidate) from mod1 (true model), in bits.
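A hypothetical usage example with toy bivariate Gaussian models (the specific values are illustrative, not from the package):

    mod1 <- list(mu = c(0, 0), Sigma = diag(2))        # true model
    mod2 <- list(mu = c(1, 0), Sigma = 2 * diag(2))    # candidate model
    KL_mods(mod1, mod2)   # cost (in bits) of coding mod1 data with mod2's code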

