parEntropyEstimate: Parallel Entropy Estimation


View source: R/parEntropyEstimate.R


Description

A function that computes the joint entropy between all pairs of rows (or a specified subset) of the matrix counts, using indirect methods.


Usage

parEntropyEstimate(idx, method = method, unit = unit,
                   priorHyperParam = priorHyperParam,
                   shrinkageTarget = shrinkageTarget, boot = boot)



Arguments

idx: the index of the cell corresponding to the interaction to be estimated.


method: a character string indicating which estimate is to be computed. One of "ML" (Maximum Likelihood Estimator, default), "MM" (Miller-Madow corrected Estimator), "Bayes" (Bayesian Estimators), "CS" (Chao-Shen Estimator), "Shrink" (James-Stein shrinkage Estimator), "KD" (Kernel Density Estimator), or "KNN" (k-Nearest Neighbor Estimator); can be abbreviated. For the "Bayes" estimate it is also necessary to specify which priorHyperParam is to be used; for "Shrink" it is optional to specify a value for the shrinkageTarget parameter; for "KNN" it is also necessary to specify the number of nearest neighbors k.
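For intuition, the two simplest estimates ("ML" and "MM") can be sketched for a single vector of counts. This is a minimal illustrative sketch in Python, not the package's internal R implementation, and the function names are hypothetical:

```python
import math


def entropy_ml(counts, unit="bit"):
    """Maximum-likelihood (plug-in) entropy of a vector of counts."""
    n = sum(counts)
    base = {"bit": 2, "ban": 10, "nat": math.e}[unit]
    return -sum((c / n) * math.log(c / n, base) for c in counts if c > 0)


def entropy_mm(counts, unit="bit"):
    """Miller-Madow correction: add (m - 1) / (2n) nats, m = occupied bins,
    converted to the requested unit by dividing by log(base)."""
    n = sum(counts)
    m = sum(1 for c in counts if c > 0)
    base = {"bit": 2, "ban": 10, "nat": math.e}[unit]
    return entropy_ml(counts, unit) + (m - 1) / (2 * n * math.log(base))
```

For a uniform sample over four bins the ML estimate is exactly 2 bits, and the Miller-Madow correction is always non-negative.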


unit: the unit in which entropy is measured. One of "bit" (log2, default), "ban" (log10), or "nat" (natural units).


priorHyperParam: the prior distribution type for the Bayes estimation. One of "Jeffreys" (default; Jeffreys Prior, Krichevsky and Trofimov Estimator), "BLUnif" (Bayes-Laplace uniform Prior, Holste Estimator), "Perks" (Perks Prior, Schurmann and Grassberger Estimator), or "MiniMax" (MiniMax Prior); can be abbreviated.
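The four priors correspond to different pseudocount choices added to each bin before plugging in. The sketch below is illustrative Python (not the package's code); the pseudocount mapping follows the usual convention (Jeffreys 1/2, Bayes-Laplace 1, Perks 1/m, MiniMax sqrt(n)/m), and the function name is hypothetical:

```python
import math


def entropy_bayes(counts, prior="Jeffreys", unit="bit"):
    """Bayes entropy estimate: plug-in entropy of posterior-mean frequencies."""
    m = len(counts)                      # number of bins
    n = sum(counts)                      # sample size
    a = {"Jeffreys": 0.5,                # Krichevsky-Trofimov
         "BLUnif": 1.0,                  # Holste
         "Perks": 1.0 / m,               # Schurmann-Grassberger
         "MiniMax": math.sqrt(n) / m}[prior]
    base = {"bit": 2, "ban": 10, "nat": math.e}[unit]
    probs = [(c + a) / (n + m * a) for c in counts]
    return -sum(p * math.log(p, base) for p in probs if p > 0)
```

Note that the pseudocounts pull empty bins away from zero, so the Bayes estimate of a sparse table is larger than the ML plug-in estimate.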


shrinkageTarget: the shrinkage target frequencies. If not specified (default), the uniform distribution is used as target and the shrinkage intensity is estimated in a James-Stein-type fashion.
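The James-Stein-type behavior can be sketched as follows. This is an illustrative Python sketch using the Hausser-Strimmer closed-form shrinkage intensity, not the package's R code, and the function name is hypothetical:

```python
def freqs_shrink(counts, target=None):
    """James-Stein-type shrinkage of ML frequencies toward a target distribution."""
    n = sum(counts)
    m = len(counts)
    ml = [c / n for c in counts]
    if target is None:
        target = [1.0 / m] * m           # uniform target, as in the default above
    # shrinkage intensity (Hausser-Strimmer style), clipped to [0, 1]
    num = 1.0 - sum(p * p for p in ml)
    den = (n - 1) * sum((t - p) ** 2 for t, p in zip(target, ml))
    lam = 1.0 if den == 0 else min(1.0, num / den)
    return [lam * t + (1.0 - lam) * p for t, p in zip(target, ml)]
```

The shrunk frequencies still sum to one, and empty bins receive positive mass, which stabilizes the entropy estimate for sparse count tables.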


boot: logical (FALSE by default). Used to compute a null distribution in order to evaluate whether such an interaction is genuine or obtained by chance.


Details

Internal function of parMIEstimate.


Value

parEntropyEstimate returns the value of the joint entropy H(X, Y) of the given pair of genes.
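Concretely, for two discretized expression vectors the plug-in version of H(X, Y) is the entropy of their joint contingency table. A minimal Python sketch (joint_entropy_ml is a hypothetical name, not part of the package):

```python
import math
from collections import Counter


def joint_entropy_ml(x, y, unit="bit"):
    """Plug-in joint entropy H(X, Y) of two equal-length discrete vectors."""
    counts = Counter(zip(x, y)).values()   # joint contingency table, flattened
    n = sum(counts)
    base = {"bit": 2, "ban": 10, "nat": math.e}[unit]
    return -sum((c / n) * math.log(c / n, base) for c in counts)
```

When x and y are independent and uniform over two symbols, H(X, Y) = 2 bits; when y duplicates x, it collapses to H(X) = 1 bit.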


Author(s)

Luciano Garofano [email protected], Stefano Maria Pagnotta, Michele Ceccarelli


References

Paninski L. (2003). Estimation of Entropy and Mutual Information. Neural Computation, vol. 15 no. 6 pp. 1191-1253.

Meyer P.E., Laffitte F., Bontempi G. (2008). minet: A R/Bioconductor Package for Inferring Large Transcriptional Networks Using Mutual Information. BMC Bioinformatics 9:461.

Antos A., Kontoyiannis I. (2001). Convergence properties of functional estimates for discrete distributions. Random Structures and Algorithms, vol. 19 pp. 163-193.

Strong S., Koberle R., de Ruyter van Steveninck R.R., Bialek W. (1998). Entropy and Information in Neural Spike Trains. Physical Review Letters, vol. 80 pp. 197-202.

Miller G.A. (1955). Note on the bias of information estimates. Information Theory in Psychology, II-B pp. 95-100.

Jeffreys H. (1946). An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London, vol. 186 no. 1007 pp. 453-461.

Krichevsky R.E., Trofimov V.K. (1981). The performance of universal encoding. IEEE Transactions on Information Theory, vol. 27 pp. 199-207.

Holste D., Herzel H. (1998). Bayes' estimators of generalized entropies. Journal of Physics A, vol. 31 pp. 2551-2566.

Perks W. (1947). Some observations on inverse probability including a new indifference rule. Journal of the Institute of Actuaries, vol. 73 pp. 285-334.

Schurmann T., Grassberger P. (1996). Entropy estimation of symbol sequences. Chaos, vol. 6 pp. 414-427.

Trybula S. (1958). Some problems of simultaneous minimax estimation. The Annals of Mathematical Statistics, vol. 29 pp. 245-253.

Chao A., Shen T.J. (2003). Nonparametric estimation of Shannon's index diversity when there are unseen species. Environmental and Ecological Statistics, vol. 10 pp. 429-443.

James W., Stein C. (1961). Estimation with Quadratic Loss. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1 pp. 361-379.

See Also

parMIEstimate, parMIKD

lucgar/synRNASeqNet documentation built on May 21, 2017, 3:39 p.m.