jensenSDiv | R Documentation
Compute the Jensen-Shannon divergence of probability vectors p and q.
jensenSDiv(p, q, Pi = 0.5, logbase = 2)
p, q: Probability vectors, with sum(p_i) = 1 and sum(q_i) = 1.

Pi: Weight of the probability distribution p. The weight for q is 1 - Pi. Default: Pi = 0.5.

logbase: A positive number: the base with respect to which logarithms are computed. Default: logbase = 2.
The Jensen–Shannon divergence is a method of measuring the similarity between two probability distributions. Here, the generalization given in reference [1] is used. The Jensen–Shannon divergence is expressed in terms of the Shannon entropy. It satisfies 0 <= jensenSDiv(p, q) <= 1, provided that the base 2 logarithm is used in the estimation of the Shannon entropies involved.
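For clarity, a minimal sketch of the generalized divergence from reference [1] is given below. It assumes that jensenSDiv evaluates JSD_Pi(p, q) = H(Pi*p + (1 - Pi)*q) - Pi*H(p) - (1 - Pi)*H(q), where H denotes the Shannon entropy; the helper names shannon and jsd_sketch are illustrative, not part of the package.

## Illustrative sketch only; shannon() and jsd_sketch() are hypothetical helpers.
shannon <- function(p, logbase = 2) {
    p <- p[p > 0]  # terms with p_i = 0 contribute 0 by convention
    -sum(p * log(p, base = logbase))
}
jsd_sketch <- function(p, q, Pi = 0.5, logbase = 2) {
    m <- Pi * p + (1 - Pi) * q  # weighted mixture of the two distributions
    shannon(m, logbase) - Pi * shannon(p, logbase) - (1 - Pi) * shannon(q, logbase)
}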
Robersy Sanchez (https://github.com/genomaths).
1. J. Lin, “Divergence Measures Based on the Shannon Entropy,” IEEE Trans. Inform. Theory, vol. 37, no. 1, pp. 145–151, 1991.
set.seed(123)                    # for reproducible sampling
counts <- sample.int(10)         # a random permutation of 1:10
prob.p <- counts / sum(counts)   # normalize to a probability vector
counts <- sample.int(12, 10)     # 10 values drawn from 1:12 without replacement
prob.q <- counts / sum(counts)
jensenSDiv(prob.p, prob.q)
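With the default weight Pi = 0.5 the measure is symmetric in its arguments, so swapping them should (on the assumption about the formula sketched above) return the same value:

jensenSDiv(prob.q, prob.p)  # expected to equal jensenSDiv(prob.p, prob.q)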