Description

Compute the value of K (the multiplier of the estimated standard deviation) used
to construct a prediction interval for the next k observations or next set of
k means based on data from a normal distribution.
The function predIntNormK is called by predIntNorm.
Usage

predIntNormK(n, df = n - 1, n.mean = 1, k = 1,
    method = "Bonferroni", pi.type = "two-sided",
    conf.level = 0.95)

Arguments

n
a positive integer greater than 2 indicating the sample size upon which the prediction interval is based.

df
the degrees of freedom associated with the prediction interval. The default is df=n-1.

n.mean
positive integer specifying the sample size associated with the k future averages. The default value is n.mean=1 (i.e., individual observations).

k
positive integer specifying the number of future observations or averages the prediction interval should contain with confidence level conf.level. The default value is k=1.

method
character string specifying the method to use if the number of future observations (k) is greater than 1. The possible values are "Bonferroni" (approximate method based on the Bonferroni inequality; the default) and "exact" (exact method due to Dunnett, 1955). This argument is ignored if k=1.

pi.type
character string indicating what kind of prediction interval to compute. The possible values are "two-sided" (the default), "lower", and "upper".

conf.level
a scalar between 0 and 1 indicating the confidence level of the prediction interval. The default value is conf.level=0.95.
Details

A prediction interval for some population is an interval on the real line constructed so that it will contain k future observations or averages from that population with some specified probability (1-α)100\%, where 0 < α < 1 and k is some pre-specified positive integer. The quantity (1-α)100\% is called the confidence coefficient or confidence level associated with the prediction interval.
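As a quick numerical illustration of this coverage property (a minimal base R sketch, not part of the EnvStats package; it uses the t-based multiplier for the simplest case, which is derived in Equations (8) and (12) below), repeated sampling shows that the two-sided interval for the next single observation contains it about 95% of the time:

```r
# Monte Carlo sketch of the coverage property for k = 1, m = 1.
set.seed(1)
n <- 20
conf <- 0.95
# Two-sided multiplier K for the next single observation (Equation (8) below).
K <- qt(1 - (1 - conf)/2, df = n - 1) * sqrt(1/1 + 1/n)
covered <- replicate(10000, {
  x <- rnorm(n)                    # observed sample
  y <- rnorm(1)                    # one future observation
  abs(y - mean(x)) <= K * sd(x)    # is y inside [xbar - K*s, xbar + K*s]?
})
mean(covered)   # close to 0.95
```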
Let \underline{x} = x_1, x_2, …, x_n denote a vector of n observations from a normal distribution with parameters mean=μ and sd=σ. Also, let m denote the sample size associated with the k future averages (i.e., n.mean=m). When m=1, each average is really just a single observation, so in the rest of this help file the term “averages” will replace the phrase “observations or averages”.
For a normal distribution, the form of a two-sided (1-α)100\% prediction interval is:

[\bar{x} - Ks, \bar{x} + Ks] \;\;\;\;\;\; (1)

where \bar{x} denotes the sample mean:

\bar{x} = \frac{1}{n} ∑_{i=1}^n x_i \;\;\;\;\;\; (2)

s denotes the sample standard deviation:

s^2 = \frac{1}{n-1} ∑_{i=1}^n (x_i - \bar{x})^2 \;\;\;\;\;\; (3)
and K denotes a constant that depends on the sample size n, the
confidence level, the number of future averages k, and the
sample size associated with the future averages, m. Do not confuse the
constant K (uppercase K) with the number of future averages k
(lowercase k). The symbol K is used here to be consistent with the
notation used for tolerance intervals (see tolIntNorm
).
Similarly, the form of a one-sided lower prediction interval is:

[\bar{x} - Ks, ∞] \;\;\;\;\;\; (4)

and the form of a one-sided upper prediction interval is:

[-∞, \bar{x} + Ks] \;\;\;\;\;\; (5)

but K differs for one-sided versus two-sided prediction intervals.
The derivation of the constant K is explained below. The function predIntNormK computes the value of K and is called by predIntNorm.
The Derivation of K for One Future Observation or Average (k = 1)
Let X denote a random variable from a normal distribution
with parameters mean=
μ and sd=
σ, and let
x_p denote the p'th quantile of X.
A true two-sided (1-α)100\% prediction interval for the next k=1 observation of X is given by:

[x_{α/2}, x_{1-α/2}] = [μ - z_{1-α/2}σ, μ + z_{1-α/2}σ] \;\;\;\;\;\; (6)

where z_p denotes the p'th quantile of a standard normal distribution.
More generally, a true two-sided (1-α)100\% prediction interval for the next k=1 average based on a sample of size m is given by:

[μ - z_{1-α/2}\frac{σ}{√{m}}, μ + z_{1-α/2}\frac{σ}{√{m}}] \;\;\;\;\;\; (7)
Because the values of μ and σ are unknown, they must be estimated, and a prediction interval then constructed based on the estimated values of μ and σ.
For a two-sided prediction interval (pi.type="two-sided"), the constant K for a (1-α)100\% prediction interval for the next k=1 average based on a sample size of m is computed as:

K = t_{n-1, 1-α/2} √{\frac{1}{m} + \frac{1}{n}} \;\;\;\;\;\; (8)

where t_{ν, p} denotes the p'th quantile of the Student's t-distribution with ν degrees of freedom. For a one-sided prediction interval (pi.type="lower" or pi.type="upper"), the constant K is computed as:

K = t_{n-1, 1-α} √{\frac{1}{m} + \frac{1}{n}} \;\;\;\;\;\; (9)
The formulas for these prediction intervals are derived as follows. Let \bar{y} denote the future average based on m observations. Then the quantity \bar{y} - \bar{x} has a normal distribution with expectation and variance given by:

E(\bar{y} - \bar{x}) = 0 \;\;\;\;\;\; (10)

Var(\bar{y} - \bar{x}) = Var(\bar{y}) + Var(\bar{x}) = \frac{σ^2}{m} + \frac{σ^2}{n} = σ^2 (\frac{1}{m} + \frac{1}{n}) \;\;\;\;\;\; (11)

so the quantity

t = \frac{\bar{y} - \bar{x}}{s √{\frac{1}{m} + \frac{1}{n}}} \;\;\;\;\;\; (12)

has a Student's t-distribution with n-1 degrees of freedom.
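Equations (8) and (9) can be sketched directly in base R; the helper name K.k1 is hypothetical (not part of EnvStats), and predIntNormK itself should be preferred in practice:

```r
# Minimal sketch of Equations (8) and (9) for the k = 1 case.
K.k1 <- function(n, m = 1, conf.level = 0.95,
                 pi.type = c("two-sided", "lower", "upper")) {
  pi.type <- match.arg(pi.type)
  alpha <- 1 - conf.level
  # Two-sided intervals split alpha between the two tails.
  p <- if (pi.type == "two-sided") 1 - alpha/2 else 1 - alpha
  qt(p, df = n - 1) * sqrt(1/m + 1/n)
}

K.k1(n = 20)   # ~2.1447, agreeing with predIntNormK(n = 20) in the Examples
```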
The Derivation of K for More than One Future Observation or Average (k > 1)
When k > 1, the function predIntNormK allows for two ways to compute K: an exact method due to Dunnett (1955) (method="exact"), and an approximate (conservative) method based on the Bonferroni inequality (method="Bonferroni"; see Miller, 1981a, pp. 8, 67-70; Gibbons et al., 2009, p. 4). Each of these methods is explained below.
Exact Method Due to Dunnett (1955) (method="exact")
Dunnett (1955) derived the value of K in the context of the multiple comparisons problem of comparing several treatment means to one control mean. The value of K is computed as:

K = c √{\frac{1}{m} + \frac{1}{n}} \;\;\;\;\;\; (13)

where c is a constant that depends on the sample size n, the number of future observations (averages) k, the sample size associated with the k future averages m, and the confidence level (1-α)100\%.
When pi.type="lower" or pi.type="upper", the value of c is the number that satisfies the following equation (Gupta and Sobel, 1957; Hahn, 1970a):

1 - α = \int_{0}^{∞} F_1(cs, k, ρ) h(s√{n-1}, n-1) √{n-1} ds \;\;\;\;\;\; (14)

where

F_1(x, k, ρ) = \int_{-∞}^{∞} [Φ(\frac{x + ρ^{1/2}y}{√{1 - ρ}})]^k φ(y) dy \;\;\;\;\;\; (15)

ρ = 1 / (\frac{n}{m} + 1) \;\;\;\;\;\; (16)

h(x, ν) = \frac{x^{ν-1} e^{-x^2/2}}{2^{(ν/2) - 1} Γ(\frac{ν}{2})} \;\;\;\;\;\; (17)

and Φ() and φ() denote the cumulative distribution function and probability density function, respectively, of the standard normal distribution. Note that the function h(x, ν) is the probability density function of a chi random variable with ν degrees of freedom.
When pi.type="two-sided", the value of c is the number that satisfies the following equation:

1 - α = \int_{0}^{∞} F_2(cs, k, ρ) h(s√{n-1}, n-1) √{n-1} ds \;\;\;\;\;\; (18)

where

F_2(x, k, ρ) = \int_{-∞}^{∞} [Φ(\frac{x + ρ^{1/2}y}{√{1 - ρ}}) - Φ(\frac{-x + ρ^{1/2}y}{√{1 - ρ}})]^k φ(y) dy \;\;\;\;\;\; (19)
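As an illustrative sketch of how c could be obtained numerically (base R only, with hypothetical helper names; the implementation inside EnvStats may use different, more careful numerics), Equation (14) for the one-sided case can be solved by nesting integrate() inside uniroot():

```r
# Sketch: solve Equation (14) for c in the one-sided case by numerical
# integration. F1 implements Equation (15); chi.pdf implements Equation (17).
chi.pdf <- function(x, nu) x^(nu - 1) * exp(-x^2/2) / (2^(nu/2 - 1) * gamma(nu/2))

F1 <- function(x, k, rho) {
  integrate(function(y) pnorm((x + sqrt(rho) * y) / sqrt(1 - rho))^k * dnorm(y),
            -Inf, Inf)$value
}

# Left-hand side of Equation (14) as a function of the constant c.
conf.exact <- function(c., n, k, rho) {
  integrate(Vectorize(function(s)
      F1(c. * s, k, rho) * chi.pdf(s * sqrt(n - 1), n - 1) * sqrt(n - 1)),
    0, Inf)$value
}

n <- 20; m <- 2; k <- 3
rho <- 1 / (n/m + 1)                       # Equation (16)
c. <- uniroot(function(c.) conf.exact(c., n, k, rho) - 0.99,
              interval = c(1, 10))$root
c. * sqrt(1/m + 1/n)   # K per Equation (13); close to the exact value
                       # 2.251084 shown in the Examples below
```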
Approximate Method Based on the Bonferroni Inequality (method="Bonferroni")
As shown above, when k=1, the value of K is given by Equation (8) or Equation (9) for two-sided or one-sided prediction intervals, respectively. When k > 1, a conservative way to construct a (1-α^*)100\% prediction interval for the next k observations or averages is to use a Bonferroni correction (Miller, 1981a, p. 8) and set α = α^*/k in Equation (8) or (9) (Chew, 1968). This value of K will be conservative in that the computed prediction intervals will be wider than the exact prediction intervals.
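The Bonferroni adjustment amounts to a one-line change to the k = 1 formula; the helper name K.bonf here is hypothetical (not part of EnvStats):

```r
# Sketch of the Bonferroni method: set alpha = alpha*/k in Equation (8) or (9).
K.bonf <- function(n, k, m = 1, conf.level = 0.95,
                   pi.type = c("two-sided", "lower", "upper")) {
  pi.type <- match.arg(pi.type)
  alpha <- (1 - conf.level) / k            # Bonferroni correction
  p <- if (pi.type == "two-sided") 1 - alpha/2 else 1 - alpha
  qt(p, df = n - 1) * sqrt(1/m + 1/n)
}

K.bonf(n = 20, k = 3, m = 2, conf.level = 0.99, pi.type = "upper")
# ~2.258, agreeing with the Bonferroni example shown in the Examples below
```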
Hahn (1969, 1970a) compared the exact values of K with those based on the
Bonferroni inequality for the case of m=1 and found the approximation to be
quite satisfactory except when n is small, k is large, and α
is large. For example, Gibbons (1987a) notes that for a 99% prediction interval
(i.e., α = 0.01) for the next k observations, if n > 4,
the bias of K is never greater than 1% no matter what the value of k.
Value

A numeric scalar equal to K, the multiplier of the estimated standard deviation that is used to construct the prediction interval.
Note

Prediction and tolerance intervals have long been applied to quality control and life testing problems (Hahn, 1970b,c; Hahn and Nelson, 1973). In the context of environmental statistics, prediction intervals are useful for analyzing data from groundwater detection monitoring programs at hazardous and solid waste facilities (e.g., Gibbons et al., 2009; Millard and Neerchal, 2001; USEPA, 2009).
Author(s)

Steven P. Millard ([email protected])
References

Berthouex, P.M., and L.C. Brown. (2002). Statistics for Environmental Engineers. Lewis Publishers, Boca Raton.

Dunnett, C.W. (1955). A Multiple Comparisons Procedure for Comparing Several Treatments with a Control. Journal of the American Statistical Association 50, 1096-1121.

Dunnett, C.W. (1964). New Tables for Multiple Comparisons with a Control. Biometrics 20, 482-491.

Gibbons, R.D., D.K. Bhaumik, and S. Aryal. (2009). Statistical Methods for Groundwater Monitoring, Second Edition. John Wiley & Sons, Hoboken.

Hahn, G.J. (1969). Factors for Calculating Two-Sided Prediction Intervals for Samples from a Normal Distribution. Journal of the American Statistical Association 64(327), 878-898.

Hahn, G.J. (1970a). Additional Factors for Calculating Prediction Intervals for Samples from a Normal Distribution. Journal of the American Statistical Association 65(332), 1668-1676.

Hahn, G.J. (1970b). Statistical Intervals for a Normal Population, Part I: Tables, Examples and Applications. Journal of Quality Technology 2(3), 115-125.

Hahn, G.J. (1970c). Statistical Intervals for a Normal Population, Part II: Formulas, Assumptions, Some Derivations. Journal of Quality Technology 2(4), 195-206.

Hahn, G.J., and W.Q. Meeker. (1991). Statistical Intervals: A Guide for Practitioners. John Wiley and Sons, New York.

Hahn, G., and W. Nelson. (1973). A Survey of Prediction Intervals and Their Applications. Journal of Quality Technology 5, 178-188.

Helsel, D.R., and R.M. Hirsch. (1992). Statistical Methods in Water Resources Research. Elsevier, New York.

Helsel, D.R., and R.M. Hirsch. (2002). Statistical Methods in Water Resources. Techniques of Water Resources Investigations, Book 4, Chapter A3. U.S. Geological Survey. (Available online at: http://pubs.usgs.gov/twri/twri4a3/.)

Millard, S.P., and Neerchal, N.K. (2001). Environmental Statistics with S-PLUS. CRC Press, Boca Raton, Florida.

Miller, R.G. (1981a). Simultaneous Statistical Inference. McGraw-Hill, New York.

USEPA. (2009). Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities, Unified Guidance. EPA 530/R-09-007, March 2009. Office of Resource Conservation and Recovery, Program Implementation and Information Division. U.S. Environmental Protection Agency, Washington, D.C.

USEPA. (2010). Errata Sheet - March 2009 Unified Guidance. EPA 530/R-09-007a, August 9, 2010. Office of Resource Conservation and Recovery, Program Information and Implementation Division. U.S. Environmental Protection Agency, Washington, D.C.
See Also

predIntNorm, predIntNormSimultaneous, predIntLnorm, tolIntNorm, Normal, estimate.object, enorm, eqnorm.
Examples

# Compute the value of K for a two-sided 95% prediction interval
# for the next observation given a sample size of n=20.

predIntNormK(n = 20)
#[1] 2.144711

#----------

# Compute the value of K for a one-sided upper 99% prediction limit
# for the next 3 averages of order 2 (i.e., each of the 3 future
# averages is based on a sample size of 2 future observations) given a
# sample size of n=20.

predIntNormK(n = 20, n.mean = 2, k = 3, pi.type = "upper",
    conf.level = 0.99)
#[1] 2.258026

# Compare the result above, which is based on the Bonferroni method,
# with the exact method.

predIntNormK(n = 20, n.mean = 2, k = 3, method = "exact",
    pi.type = "upper", conf.level = 0.99)
#[1] 2.251084

#----------

# Example 18-1 of USEPA (2009, p. 18-9) shows how to construct a 95%
# prediction interval for 4 future observations assuming a
# normal distribution based on arsenic concentrations (ppb) in
# groundwater at a solid waste landfill. There were 4 years of
# quarterly monitoring, and years 1-3 are considered background,
# so the sample size for the prediction limit is n = 12,
# and the number of future samples is k = 4.

predIntNormK(n = 12, k = 4, pi.type = "upper")
#[1] 2.698976
