continuous_entropy: Shannon entropy for a continuous pdf


View source: R/continuous_entropy.R

Description

Computes the Shannon entropy \mathcal{H}(p) for a continuous probability density function (pdf) p(x) using numerical integration.

Usage

continuous_entropy(pdf, lower, upper, base = 2)

Arguments

pdf

R function for the pdf p(x) of an RV X \sim p(x). This function must be non-negative and integrate to 1 over the interval [lower, upper].

lower, upper

lower and upper integration limits. The pdf must integrate to 1 on this interval.

base

logarithm base; entropy is measured in “nats” if base = exp(1) and in “bits” if base = 2 (the default).

Details

The Shannon entropy of a continuous random variable (RV) X \sim p(x) is defined as

\mathcal{H}(p) = -\int_{-\infty}^{\infty} p(x) \log p(x) \, dx.

Unlike discrete RVs, continuous RVs can have negative entropy (see Examples).
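
For example, if X \sim U(a, b) then p(x) = 1/(b - a) on [a, b] and

\mathcal{H}(p) = -\int_{a}^{b} \frac{1}{b - a} \log \frac{1}{b - a} \, dx = \log(b - a),

which is negative whenever b - a < 1.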

Value

scalar; entropy value (real).

Since continuous_entropy uses numerical integration (integrate()), convergence is not guaranteed (even if the integral in the definition of \mathcal{H}(p) exists). A warning is issued if integrate() does not converge.
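
As a rough illustration of this approach (a hypothetical sketch, not the actual ForeCA implementation), such a computation could be written with integrate() as:

entropy_sketch <- function(pdf, lower, upper, base = 2) {
  # integrand -p(x) * log(p(x)), with 0 * log(0) treated as 0
  integrand <- function(x) {
    px <- pdf(x)
    ifelse(px > 0, -px * log(px, base = base), 0)
  }
  result <- integrate(integrand, lower, upper)
  if (result$message != "OK") {
    warning("integrate() did not converge: ", result$message)
  }
  result$value
}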

See Also

discrete_entropy

Examples

# entropy of U(a, b) = log(b - a), so it is not necessarily positive, e.g.
continuous_entropy(function(x) dunif(x, 0, 0.5), 0, 0.5) # log2(0.5) = -1

# Same, but for U(-1, 1)
my_density <- function(x){
  dunif(x, -1, 1)
}
continuous_entropy(my_density, -1, 1) # = log2(upper - lower) = 1

# a 'triangle' distribution
continuous_entropy(function(x) x, 0, sqrt(2))
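
# A further check: the entropy of a standard normal on a wide interval
# (which contains essentially all of its mass) should be close to the
# closed-form value 0.5 * log(2 * pi * e) in the chosen base
continuous_entropy(function(x) dnorm(x), -10, 10)                 # approx 0.5 * log2(2 * pi * exp(1)) ~ 2.05 bits
continuous_entropy(function(x) dnorm(x), -10, 10, base = exp(1))  # approx 0.5 * log(2 * pi * exp(1)) ~ 1.42 nats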
