jensenSDiv: Compute Jensen-Shannon Divergence

View source: R/jensenSDiv.R


Compute Jensen-Shannon Divergence

Description

Compute the Jensen-Shannon divergence of probability vectors p and q.

Usage

jensenSDiv(p, q, Pi = 0.5, logbase = 2)

Arguments

p, q

Probability vectors, sum(p_i) = 1 and sum(q_i) = 1.

Pi

Weight of the probability distribution p; the weight for q is 1 - Pi. Default is Pi = 0.5.

logbase

A positive number: the base with respect to which logarithms are computed. Default is logbase = 2.

Details

The Jensen–Shannon divergence is a measure of the similarity between two probability distributions. Here, the generalization given in reference [1] is used. The Jensen–Shannon divergence is expressed in terms of Shannon entropy, and 0 <= jensenSDiv(p, q) <= 1, provided that the base-2 logarithm is used in the estimation of the Shannon entropies involved.
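For reference, the generalization from [1] can be written as follows (a sketch in LaTeX notation, assuming Pi is the weight of p as described under Arguments, with b = logbase and H the Shannon entropy):

\[
\mathrm{JSD}_{\pi}(p, q) = H\bigl(\pi p + (1 - \pi)\, q\bigr) - \bigl[\pi H(p) + (1 - \pi) H(q)\bigr],
\qquad
H(x) = -\sum_i x_i \log_b x_i.
\]

With Pi = 0.5 and logbase = 2, this reduces to the standard Jensen-Shannon divergence.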

Author(s)

Robersy Sanchez (https://github.com/genomaths).

References

1. J. Lin, “Divergence Measures Based on the Shannon Entropy,” IEEE Trans. Inform. Theory, vol. 37, no. 1, pp. 145–151, 1991.

Examples

set.seed(123)
## Build two probability vectors from random integer counts
counts.p <- sample.int(10)
prob.p <- counts.p / sum(counts.p)
counts.q <- sample.int(12, 10)
prob.q <- counts.q / sum(counts.q)
jensenSDiv(prob.p, prob.q)
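
The snippet below is a minimal hand computation of the formula sketched under Details, for illustration only; the helper shannon is hypothetical and defined here, not part of the package. If jensenSDiv implements the generalization from [1] with base-2 logarithms, the two values should agree.

## Hypothetical helper: Shannon entropy of a probability vector
shannon <- function(x, logbase = 2) {
    x <- x[x > 0]                       # drop zero entries: 0 * log(0) is taken as 0
    -sum(x * log(x, base = logbase))
}
Pi <- 0.5
m <- Pi * prob.p + (1 - Pi) * prob.q    # the mixture distribution
jsd.manual <- shannon(m) -
    (Pi * shannon(prob.p) + (1 - Pi) * shannon(prob.q))
all.equal(jsd.manual, jensenSDiv(prob.p, prob.q))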
