PID.measure: A comprehensive function for evaluating the partial information decomposition


View source: R/information.measure.R

Description

The PID.measure function decomposes the information that two source variables carry about a common target into four parts: synergistic information (synergy), unique information from source x, unique information from source y, and shared information (redundancy). The input to PID.measure is a joint count table.

Usage

PID.measure(
  XYZ,
  method = c("ML", "Jeffreys", "Laplace", "SG", "minimax", "shrink"),
  lambda.probs,
  unit = c("log", "log2", "log10"),
  verbose = TRUE
)

Arguments

XYZ

a joint count distribution table of three random variables

method

the probability estimation algorithm; six are available (see Details), with "ML" as the default.

lambda.probs

the shrinkage intensity; only used when the probability estimator is "shrink".

unit

the base of the logarithm. The default, "log", uses the natural logarithm (nats). To report entropy in bits, set the unit to "log2".

verbose

a logical value. If TRUE, the shrinkage intensity is reported when method = "shrink".
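The unit options differ only in the base of the logarithm. As a quick base-R illustration (not package code), the entropy of a fair coin under each option:

```r
# Entropy of a fair coin under the three unit options (base R, illustrative):
p <- c(0.5, 0.5)
-sum(p * log(p))     # unit = "log"  : ~0.693 nats
-sum(p * log2(p))    # unit = "log2" : 1 bit
-sum(p * log10(p))   # unit = "log10": ~0.301 hartleys
```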

Details

Six probability estimation methods are available to estimate the underlying bin probabilities from the observed counts:
method = "ML": maximum likelihood estimator, also referred to as the empirical probability,
method = "Jeffreys": Dirichlet distribution estimator with prior a = 0.5,
method = "Laplace": Dirichlet distribution estimator with prior a = 1,
method = "SG": Dirichlet distribution estimator with prior a = 1/length(XYZ),
method = "minimax": Dirichlet distribution estimator with prior a = sqrt(sum(XYZ))/length(XYZ),
method = "shrink": shrinkage estimator.
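The Dirichlet-prior options all follow the same pseudocount recipe: each observed count is smoothed with the prior a before normalising. A minimal base-R sketch of that arithmetic (illustrative only; the function name is mine, not the package's internal code):

```r
# Dirichlet-prior cell probability estimate: smooth each count with
# pseudocount a, then normalise. (Sketch, not Informeasure internals.)
dirichlet.probs <- function(counts, a) {
  (counts + a) / (sum(counts) + a * length(counts))
}

counts <- c(4, 0, 1, 3)           # a toy 4-cell count table
dirichlet.probs(counts, a = 0.5)  # "Jeffreys": prior a = 0.5
dirichlet.probs(counts, a = 1)    # "Laplace" : prior a = 1
```

Note that any positive prior gives every bin a non-zero probability, unlike the "ML" estimate, which assigns probability zero to empty bins.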

Value

PID.measure returns a list containing the synergistic information, the unique information from x, the unique information from y, the redundant information, and the sum of the four parts.
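For intuition about how these four quantities arise, here is a minimal base-R sketch of the Williams–Beer I_min decomposition cited in the References, using ML probabilities and natural-log units. It is illustrative only, not the package's implementation, and the component names in the returned list are my own:

```r
# Minimal Williams-Beer PID sketch (I_min redundancy), base R only.
# XYZ is a joint count array with dims (x, y, z). Illustrative code;
# not Informeasure's internal implementation.
pid.sketch <- function(XYZ) {
  p   <- XYZ / sum(XYZ)               # joint probability p(x, y, z)
  pz  <- apply(p, 3, sum)             # p(z)
  pxz <- apply(p, c(1, 3), sum)       # p(x, z)
  pyz <- apply(p, c(2, 3), sum)       # p(y, z)
  px  <- apply(p, 1, sum)             # p(x)
  py  <- apply(p, 2, sum)             # p(y)

  # specific information I(Z = z; S) for one source S with joint p(s, z)
  spec <- function(psz, ps, z) {
    s.given.z <- psz[, z] / pz[z]     # p(s | z)
    z.given.s <- psz[, z] / ps        # p(z | s)
    ok <- s.given.z > 0
    sum(s.given.z[ok] * log(z.given.s[ok] / pz[z]))
  }

  # redundancy: expected minimum specific information over the two sources
  red <- sum(sapply(seq_along(pz), function(z)
    if (pz[z] > 0) pz[z] * min(spec(pxz, px, z), spec(pyz, py, z)) else 0))

  mi <- function(pab) {               # mutual information from a 2-D table
    pa <- rowSums(pab); pb <- colSums(pab)
    ok <- pab > 0
    sum(pab[ok] * log(pab[ok] / outer(pa, pb)[ok]))
  }
  pxy.z <- matrix(p, nrow = dim(p)[1] * dim(p)[2])  # flatten (x, y) vs z

  unq.x <- mi(pxz) - red              # unique information from x
  unq.y <- mi(pyz) - red              # unique information from y
  syn   <- mi(pxy.z) - unq.x - unq.y - red  # synergy closes I(X,Y; Z)
  list(Synergy = syn, Unique.X = unq.x, Unique.Y = unq.y, Redundancy = red)
}
```

By construction the four parts sum to the joint mutual information I(X,Y; Z), and under I_min all four are nonnegative (Williams & Beer, 2010).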

References

Hausser, J., & Strimmer, K. (2009). Entropy Inference and the James-Stein Estimator, with Application to Nonlinear Gene Association Networks. Journal of Machine Learning Research, 10, 1469-1484.

Williams, P. L., & Beer, R. D. (2010). Nonnegative Decomposition of Multivariate Information. arXiv preprint arXiv:1004.2515.

Chan, T. E., Stumpf, M. P., & Babtie, A. C. (2017). Gene Regulatory Network Inference from Single-Cell Data Using Multivariate Information Measures. Cell Systems, 5(3).

Examples

# three numeric vectors corresponding to three continuous random variables
x <- c(0.0, 0.2, 0.2, 0.7, 0.9, 0.9, 0.9, 0.9, 1.0)
y <- c(1.0, 2.0,  12, 8.0, 1.0, 9.0, 0.0, 3.0, 9.0)
z <- c(3.0, 7.0, 2.0,  11,  10,  10,  14, 2.0,  11)

# corresponding joint count table estimated by the "uniform width" algorithm
XYZ <- discretize3D(x, y, z, "uniform_width")

# corresponding partial information decomposition
PID.measure(XYZ)

# corresponding joint count table estimated by the "uniform frequency" algorithm
XYZ <- discretize3D(x, y, z, "uniform_frequency")

# corresponding partial information decomposition
PID.measure(XYZ)

Informeasure documentation built on Nov. 8, 2020, 7:20 p.m.