View source: R/discrete_entropy.R
Description:

Computes the Shannon entropy

\mathcal{H}(p) = -∑_{i=1}^{n} p_i \log p_i

of a discrete random variable X taking values in \lbrace x_1, …, x_n \rbrace with probability mass function (pmf) P(X = x_i) = p_i, where p_i ≥ 0 for all i and ∑_{i=1}^{n} p_i = 1.
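As an illustration of this definition (a hand-rolled sketch, not the package function itself, which adds the options described below), the entropy of a pmf can be computed directly in R:

# illustrative sketch of the definition above; shannon_entropy is a
# hypothetical helper, not part of the package
shannon_entropy <- function(p, base = 2) {
  stopifnot(all(p >= 0), isTRUE(all.equal(sum(p), 1)))
  p <- p[p > 0]  # by convention 0 * log(0) = 0, so zero entries drop out
  -sum(p * log(p, base = base))
}

shannon_entropy(c(0.5, 0.25, 0.25))  # 1.5 bits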
Usage:

discrete_entropy(
  probs,
  base = 2,
  method = c("MLE"),
  threshold = 0,
  prior.probs = NULL,
  prior.weight = 0
)
Arguments:

probs: numeric; probabilities (empirical frequencies). Must be non-negative and sum to 1.

base: logarithm base; entropy is measured in “nats” for base = exp(1) and in “bits” for base = 2 (default).

method: string; method to estimate entropy; see Details below.

threshold: numeric; frequencies below threshold are set to 0; the default threshold = 0 applies no thresholding.

prior.probs: optional; only used if prior.weight > 0. Prior distribution over the n outcomes; defaults to a uniform distribution (see Details).

prior.weight: numeric; how much weight the prior distribution gets in a mixture model between data and prior distribution. Must be between 0 and 1 (default: 0).
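For example, assuming base works like the base argument of R's log(), results in different bases differ only by a constant factor (log(2) between bits and nats):

p <- c(0.5, 0.25, 0.25)
discrete_entropy(p)                 # default base = 2: entropy in bits
discrete_entropy(p, base = exp(1))  # natural log: entropy in nats
# conversion: H_nats = H_bits * log(2)
discrete_entropy(p) * log(2)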
Details:

discrete_entropy uses a plug-in estimator (method = "MLE"):

\widehat{\mathcal{H}}(p) = - ∑_{i=1}^{n} \widehat{p}_i \log \widehat{p}_i.

If prior.weight > 0, then it mixes the observed proportions \widehat{p}_i with a prior distribution

\widehat{p}_i ← (1 - λ) \cdot \widehat{p}_i + λ \cdot prior_i, \quad i = 1, …, n,

where λ \in [0, 1] is the prior.weight parameter. By default the prior is a uniform distribution, i.e., prior_i = \frac{1}{n} for all i.

Note that this plug-in estimator is biased. See References for an overview of alternative methods.
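As a sketch of this shrinkage step (assuming prior.probs = NULL does fall back to the uniform prior, as stated above), the mixture can be formed by hand and passed to the plug-in estimator:

set.seed(1)
x <- sample(1:4, size = 20, replace = TRUE)
p.hat <- as.numeric(prop.table(table(x)))       # observed proportions
lambda <- 0.2                                   # plays the role of prior.weight
prior <- rep(1 / length(p.hat), length(p.hat))  # uniform prior
p.smooth <- (1 - lambda) * p.hat + lambda * prior

discrete_entropy(p.smooth)  # plug-in entropy of the smoothed pmf
# should agree with letting discrete_entropy do the mixing itself:
discrete_entropy(p.hat, prior.probs = prior, prior.weight = lambda)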
Value:

numeric; a non-negative real value.
References:

Archer E., Park I. M., Pillow J. W. (2014). “Bayesian Entropy Estimation for Countable Discrete Distributions”. Journal of Machine Learning Research (JMLR), 15, 2833-2868. Available at http://jmlr.org/papers/v15/archer14a.html.
Examples:

probs.tmp <- rexp(5)
probs.tmp <- sort(probs.tmp / sum(probs.tmp))

unif.distr <- rep(1 / length(probs.tmp), length(probs.tmp))

matplot(cbind(probs.tmp, unif.distr), pch = 19,
        ylab = "P(X = k)", xlab = "k")
matlines(cbind(probs.tmp, unif.distr))
legend("topleft", c("non-uniform", "uniform"), pch = 19,
       lty = 1:2, col = 1:2, box.lty = 0)

discrete_entropy(probs.tmp)
# uniform distribution has the largest entropy among all bounded discrete pmfs
# (here = log(5))
discrete_entropy(unif.distr)
# no uncertainty if one element occurs with probability 1
discrete_entropy(c(1, 0, 0))