Computes the severity at various discrepancies from the null hypothesis for the one-sided hypothesis test H_{0}: μ = μ_{0} vs H_{1}: μ > μ_{0}, where μ_{0} is the hypothesized value. Also plots the severity curve(s) and the power curve on a single plot.
mu0 — the hypothesized value μ_{0}, an integer strictly greater than zero.

xbar — a non-empty numeric vector of up to 6 elements. Each element is a sample mean \bar{x}, a real number in the closed interval [μ_{0}, μ_{0} + 1]; that is, each sample mean describes a different set of outcomes \mathbf{x}_{0} under consideration.

sigma — the standard deviation σ, a scalar strictly greater than zero, assumed to be known in this case.

n — the sample size n, an integer strictly greater than zero.

alpha — the pre-data significance level α, a real number in the open interval (0, 1); the significance level determines the rejection region for the hypothesis test.
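As a hedged illustration of the last point (a sketch under the stated Normal model, not the package's own code), the significance level α fixes the smallest sample mean at which the one-sided test rejects H_{0}:

```python
from statistics import NormalDist

# Hypothetical sketch (not the package's code): how the pre-data
# significance level alpha determines the rejection region of the
# one-sided test H0: mu = mu0 vs H1: mu > mu0 with known sigma.
def rejection_cutoff(mu0, sigma, n, alpha):
    """Smallest sample mean xbar for which H0 is rejected at level alpha."""
    c_alpha = NormalDist().inv_cdf(1 - alpha)  # critical value, e.g. ~1.96 for alpha = 0.025
    return mu0 + c_alpha * sigma / n ** 0.5
```

For instance, with mu0 = 0, sigma = 2, n = 100 and alpha = 0.025, the cutoff is about 0.392, so the outcome xbar = 0.4 leads to rejection of H_{0} and xbar = 0.3 to acceptance.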
Given mu0 and xbar (see the “Arguments” section), as well as the other inputs, this function separates the elements of xbar into two categories: those that lead to rejecting the null hypothesis (i.e., H_{0}: μ = μ_{0}) and those that lead to accepting it. In other words, all the inferences that lead to acceptance of H_{0} are grouped together, as are the inferences that lead to rejection of H_{0}. However, if more than 3 elements of xbar fall into either category, only the first 3 elements in that category are considered.
In addition, the sampling distribution of the sample mean is Normal (Gaussian) under both the null and the alternative hypothesis in this case.
*** The difference between this version and previous versions is that one more input has been added for additional flexibility: the user can now control the hypothesized value of the unknown parameter μ. ***
This function also contributes as an introduction to the severity concept, for which the general inferential rationale is the following:
“Severity rationale: Error probabilities may be used to make inferences about the process giving rise to data, by enabling the assessment of how well probed or how severely tested claims are, with data \mathbf{x}_{0}.” (Mayo & Spanos 2006)
Note: Although the degree of severity with which a hypothesis H has passed a test is used to determine if it is warranted to infer H, the degree of severity is not assigned to H itself: “it is an attribute of the test procedure as a whole, including the inference under consideration” (Mayo & Spanos 2006).
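The post-data severity calculation described above can be sketched as follows. This is an illustrative re-implementation in Python of the standard Mayo & Spanos (2006) formulas for the Normal model with known σ; the function and argument names are assumptions for this sketch, not the package's own code:

```python
from statistics import NormalDist

# Illustrative sketch (assumed names; not the package's own code) of the
# severity and power calculations in Mayo & Spanos (2006) for the test
# H0: mu = mu0 vs H1: mu > mu0 with known sigma.
_N = NormalDist()

def severity(mu0, xbar, sigma, n, alpha, gamma):
    """Severity of the inference warranted by the test outcome at
    discrepancy gamma, i.e. about mu1 = mu0 + gamma."""
    mu1 = mu0 + gamma
    d_obs = (xbar - mu1) * n ** 0.5 / sigma  # observed statistic, centered at mu1
    reject = (xbar - mu0) * n ** 0.5 / sigma > _N.inv_cdf(1 - alpha)
    if reject:
        # H0 rejected: severity of the inference "mu > mu1"
        # is P(d(X) <= d(x0); mu = mu1)
        return _N.cdf(d_obs)
    # H0 accepted: severity of the inference "mu <= mu1"
    # is P(d(X) > d(x0); mu = mu1)
    return 1 - _N.cdf(d_obs)

def power(sigma, n, alpha, gamma):
    """Power of the test at mu1 = mu0 + gamma (depends only on gamma)."""
    return 1 - _N.cdf(_N.inv_cdf(1 - alpha) - gamma * n ** 0.5 / sigma)
```

For example, with mu0 = 0, sigma = 2, n = 100 and alpha = 0.025, the outcome xbar = 0.4 rejects H_{0}, and the inference μ > 0.2 passes with severity Φ(1.0) ≈ 0.84; the power at discrepancy γ = 0 equals α, as it must.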
An object of class list, including the following elements:
accept — a numeric binary vector, where each element takes one of two values (0 or 1): 1 if the null hypothesis is accepted; 0 if the null hypothesis is rejected.

p — a numeric vector, with each element being the p-value corresponding to the appropriate element of xbar.

severity_acceptH0 — a numeric matrix, with each column containing severity calculations for each discrepancy in the vector discrepancy, for the sample means that lead to accepting H_{0}.

severity_rejectH0 — a numeric matrix, with each column containing severity calculations for each discrepancy in the vector discrepancy, for the sample means that lead to rejecting H_{0}.

power — a numeric vector comprising the power function evaluated at each discrepancy in the vector discrepancy.

discrepancy — a numeric vector of discrepancies from μ_{0}.
Nicole Mee-Hyaang Jinn
Mayo, Deborah G. 2012. “Statistical Science Meets Philosophy of Science Part 2: Shallow Versus Deep Explorations.” Rationality, Markets and Morals: Studies at the Intersection of Philosophy and Economics 3 (Special Topic: Statistical Science and Philosophy of Science) (September 26): 71-107. http://www.rmm-journal.com/downloads/Article_Mayo2.pdf.
Mayo, Deborah G., and David R. Cox. 2010. “Frequentist Statistics as a Theory of Inductive Inference.” In Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, edited by Deborah G. Mayo and Aris Spanos, 247-274. Cambridge: Cambridge University Press.
Mayo, Deborah G., and Aris Spanos. 2006. “Severe Testing as a Basic Concept in a Neyman-Pearson Philosophy of Induction.” The British Journal for the Philosophy of Science 57 (2) (June 1): 323-357. doi:10.2307/3873470. http://www.jstor.org/stable/3873470.
Mayo, Deborah G., and Aris Spanos. 2011. “Error Statistics.” In Philosophy of Statistics, edited by Prasanta S. Bandyopadhyay and Malcolm R. Forster, 7:153-198. Elsevier.