If you test multiple hypotheses at once, you risk false positives. One way of dealing with this problem is to adjust the p-values of your results to account for the number of hypotheses you have tested. This package takes a vector of p-values and returns a vector of "q-values", in the style of Anderson (2008). Rejecting hypotheses whose q-values fall below a threshold, say alpha = 0.05, controls the "false discovery rate" (FDR): the expected proportion of false positives among your significant results is then at most 0.05.
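For comparison, base R's `p.adjust()` implements the classic Benjamini-Hochberg (1995) FDR adjustment, which targets the same error rate; the sharpened q-values of Anderson (2008) that `q_val()` produces can be somewhat smaller (less conservative). A minimal sketch (the p-values below are made up for illustration):

```r
# Five illustrative p-values, sorted for readability
p <- c(0.001, 0.01, 0.04, 0.20, 0.50)

# Classic Benjamini-Hochberg FDR adjustment from base R,
# shown here as a point of comparison for q_val()
p.adjust(p, method = "BH")
```

Rejecting the adjusted values below 0.05 leaves the first three hypotheses significant while controlling the FDR at 5%.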
You can install the development version from GitHub with:
```r
# install.packages("devtools")
devtools::install_github("dmbwebb/qval")
```
This basic example shows how to adjust a vector of p-values:
```r
library(qval)

set.seed(12345)

# Ten p-values drawn uniformly from [0, 1] (consistent with true nulls),
# plus two drawn from [0, 0.05] (likely true effects)
pval_example <- c(runif(10, 0, 1), runif(2, 0, 0.05))
pval_example

# Convert the p-values into FDR-controlling q-values
q_val(pval_example)
```