Smoothed bootstrap is an extension of standard bootstrap using kernel densities.
data
vector, matrix, or data.frame. For non-numeric values standard bootstrap is applied (see below).
statistic
a function that is applied to the data.
R
the number of bootstrap replicates.
bw
the smoothing bandwidth to be used (see bw.silv).
kernel
a character string giving the smoothing kernel to be used.
This must partially match one of "multivariate", "gaussian",
"rectangular", "triangular", "epanechnikov", "biweight", "cosine",
"optcosine", or "none" with default "multivariate", and may be abbreviated.
Using "none" disables the smoothing phase, so standard bootstrap is performed.
weights
vector of importance weights. It should have as many elements as there are observations in data.
adjust
scalar; the bandwidth used is actually adjust*bw (as in density).
shrinked
logical; if TRUE, the drawn samples are shrunk so that they preserve the variance of the original data (see Details).
ignore
vector of names of columns to be ignored during the smoothing phase of the bootstrap procedure (their values are not altered using random noise).
parallel
if TRUE, parallel computing is used.
workers
the number of workers used for parallel computing.
Smoothed bootstrap is an extension of the standard bootstrap procedure, where instead of drawing samples with replacement from the empirical distribution, they are drawn from a kernel density estimate of the distribution.
For the smoothed bootstrap, points (in the univariate case) or rows (in the multivariate case) are drawn with replacement to obtain samples of size n from the initial dataset of size n, as with the standard bootstrap. Next, random noise from the kernel density K is added to each of the drawn values. The procedure is repeated R times, and statistic is evaluated on each of the samples.
The noise is added only to the numeric columns; non-numeric columns (e.g. character, factor, logical) are not altered. As a consequence, standard bootstrap is applied to the non-numeric columns and to the columns listed in the ignore parameter.
Univariate kernel densities
The univariate kernel density estimator is defined as
f(x) = sum[i]( w[i] * Kh(x - y[i]) )
where w is a vector of weights such that all w[i] ≥ 0 and sum(w) = 1 (by default uniform 1/n weights are used), Kh(u) = K(u/h)/h is the kernel K parametrized by bandwidth h, and y is the vector of data points used for estimating the kernel density.
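As a concrete illustration, the estimator above can be evaluated directly in base R. This sketch assumes a Gaussian kernel and uniform weights; the helper name kde_at is hypothetical, not part of the package:

```r
# Evaluate the weighted kernel density estimate at a point x:
# f(x) = sum_i w_i * K((x - y_i)/h) / h, with Gaussian K
kde_at <- function(x, y, h, w = rep(1 / length(y), length(y))) {
  sum(w * dnorm((x - y) / h) / h)
}

y <- mtcars$mpg
h <- bw.nrd0(y)   # a standard bandwidth rule from stats
kde_at(20, y, h)  # density estimate at x = 20
```

The same value can be cross-checked against stats::density(y, bw = h) evaluated near x = 20.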
To draw samples from univariate kernel density, the following procedure can be applied (Silverman, 1986):
Step 1 Sample i uniformly with replacement from 1,…,n.
Step 2 Generate ε to have probability density K.
Step 3 Set x = y[i] + hε.
If samples are required to have the same variance as the data (i.e. shrinked = TRUE), then Step 3 is modified as follows:
Step 3' Set x = m + (y[i] - m + hε)/sqrt(1 + h^2 var(K)/var(y))
where m = mean(y) and var(K) is the variance of the kernel (fixed to 1 for the kernels used in this package).
When the shrinkage described in Step 3' is applied, the smoothed bootstrap density function changes its form to
fb(x) = (1+r) f(x + r (x - mean(y)))
where r = sqrt(1 + h^2 var(K)/var(y)) - 1.
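The sampling steps above, including the Step 3' shrinkage, can be sketched in base R with a Gaussian kernel. The helper smooth_draw is hypothetical and only illustrates what kernelboot does internally:

```r
# One smoothed-bootstrap draw of size n from univariate data y
smooth_draw <- function(y, h, shrinked = FALSE) {
  n   <- length(y)
  i   <- sample.int(n, n, replace = TRUE)  # Step 1: resample indices
  eps <- rnorm(n)                          # Step 2: noise with density K (var(K) = 1)
  if (!shrinked)
    return(y[i] + h * eps)                 # Step 3
  m <- mean(y)
  m + (y[i] - m + h * eps) / sqrt(1 + h^2 / var(y))  # Step 3'
}

set.seed(1)
x <- smooth_draw(mtcars$mpg, h = bw.nrd0(mtcars$mpg), shrinked = TRUE)
```

With shrinked = TRUE the draws preserve, in expectation, the variance of the original data.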
This package offers the following univariate kernels:
Gaussian      1/sqrt(2π) exp(-u^2/2)
Rectangular   1/2
Triangular    1 - |u|
Epanechnikov  3/4 (1 - u^2)
Biweight      15/16 (1 - u^2)^2
Cosine        1/2 (1 + cos(π u))
Optcosine     π/4 cos(π/2 u)
All the kernels are rescaled so that their standard deviations are equal to 1, so that the bandwidth parameter controls their standard deviations.
Random generation from the Epanechnikov kernel is done using the algorithm
described by Devroye (1986). For the optcosine kernel, inverse transform
sampling is used. For the biweight kernel, random values are drawn from
a Beta(3, 3) distribution, and a Beta(3.3575, 3.3575)
distribution serves as a close approximation of the cosine kernel.
Random generation for the triangular kernel is done by taking the difference
of two i.i.d. uniform random variates. To sample from the rectangular
and Gaussian kernels, standard random generation algorithms are used
(see runif and rnorm).
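The random-generation recipes above can be reproduced directly in base R. This sketch shows the kernels on their natural supports, before the unit-standard-deviation rescaling that the package applies:

```r
set.seed(1)
n <- 1e5

tri  <- runif(n) - runif(n)              # triangular: difference of two uniforms
biw  <- 2 * rbeta(n, 3, 3) - 1           # biweight via Beta(3, 3), mapped to [-1, 1]
cosk <- 2 * rbeta(n, 3.3575, 3.3575) - 1 # close approximation of the cosine kernel
rect <- runif(n, -1, 1)                  # rectangular
gaus <- rnorm(n)                         # Gaussian
```

Each draw is symmetric about zero; the bounded kernels stay within [-1, 1].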
Product kernel densities
Univariate kernels may easily be extended to multiple dimensions by using a product kernel
f(x) = sum[i]( w[i] * prod[j]( Kh[j](x[j] - y[i,j]) ) )
where w is a vector of weights such that all w[i] ≥ 0 and sum(w) = 1 (by default uniform 1/n weights are used), Kh[j] are univariate kernels K parametrized by bandwidths h[j], and y is a matrix of data points used for estimating the kernel density.
Random generation from product kernel is done by drawing with replacement rows of y, and then adding to the sampled values random noise from univariate kernels K, parametrized by corresponding bandwidth parameters h[j].
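This procedure can be sketched in base R; the sketch assumes Gaussian kernel components, and the helper product_draw is hypothetical:

```r
# Draw n rows from a product kernel; h is a vector of per-column
# bandwidths (standard deviations of the Gaussian components)
product_draw <- function(y, h) {
  n <- nrow(y)
  i <- sample.int(n, n, replace = TRUE)               # resample rows
  noise <- sapply(h, function(hj) rnorm(n, sd = hj))  # per-column kernel noise
  y[i, , drop = FALSE] + noise
}

set.seed(1)
Y <- as.matrix(mtcars[, c("mpg", "wt")])
samp <- product_draw(Y, h = c(1.0, 0.2))  # illustrative bandwidths
```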
Multivariate kernel densities
The multivariate kernel density estimator may also be defined in terms of a multivariate kernel KH (e.g. the multivariate normal distribution, as in this package)
f(x) = sum[i]( w[i] * KH(x - y[i]) )
where w is a vector of weights such that all w[i] ≥ 0 and sum(w) = 1 (by default uniform 1/n weights are used), KH is the kernel K parametrized by the bandwidth matrix H, and y is a matrix of data points used for estimating the kernel density.
Notice: When using multivariate normal (Gaussian) distribution as a kernel K, the bandwidth parameter H is a covariance matrix as compared to standard deviations used in univariate and product kernels.
Random generation from multivariate kernel is done by drawing with replacement
rows of y, and then adding to the sampled values random noise from
multivariate normal distribution centered at the data points and parametrized
by the corresponding bandwidth matrix H. For further details see rmvg.
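The multivariate case can be sketched in base R using a Cholesky factor of H in place of the package's rmvg; the helper mvk_draw is hypothetical:

```r
# Draw n rows from a multivariate Gaussian kernel with bandwidth
# (covariance) matrix H: noise %*% chol(H) has covariance H
mvk_draw <- function(y, H) {
  n <- nrow(y); d <- ncol(y)
  i <- sample.int(n, n, replace = TRUE)  # resample rows
  y[i, , drop = FALSE] + matrix(rnorm(n * d), n, d) %*% chol(H)
}

set.seed(1)
Y <- as.matrix(mtcars[, c("mpg", "wt")])
H <- 0.5 * cov(Y)        # illustrative bandwidth matrix (cf. bw.silv)
samp <- mvk_draw(Y, H)
```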
Silverman, B. W. (1986). Density estimation for statistics and data analysis. Chapman and Hall/CRC.
Scott, D. W. (1992). Multivariate density estimation: theory, practice, and visualization. John Wiley & Sons.
Efron, B. (1981). Nonparametric estimates of standard error: the jackknife, the bootstrap and other methods. Biometrika, 589-599.
Hall, P., DiCiccio, T. J. and Romano, J. P. (1989). On smoothing and the bootstrap. The Annals of Statistics, 692-704.
Silverman, B. W. and Young, G. A. (1987). The bootstrap: To smooth or not to smooth? Biometrika, 469-479.
Wang, S. (1995). Optimizing the smoothed bootstrap. Annals of the Institute of Statistical Mathematics, 47(1), 65-80.
Young, G. A. (1990). Alternative smoothed bootstraps. Journal of the Royal Statistical Society. Series B (Methodological), 477-484.
De Angelis, D. and Young, G. A. (1992). Smoothing the bootstrap. International Statistical Review/Revue Internationale de Statistique, 45-56.
Polansky, A. M. and Schucany, W. (1997). Kernel smoothing to improve bootstrap confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59(4), 821-838.
Devroye, L. (1986). Non-uniform random variate generation. New York: Springer-Verlag.
Parzen, E. (1962). On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3), 1065-1076.
Jones, M. C. (1991). On correcting for variance inflation in kernel density estimation. Computational Statistics & Data Analysis, 11, 3-15.
bw.silv, density, bandwidth, kernelboot-class
set.seed(1)

# smoothed bootstrap of parameters of linear regression
b1 <- kernelboot(mtcars, function(data) coef(lm(mpg ~ drat + wt, data = data)), R = 250)
b1
summary(b1)

b2 <- kernelboot(mtcars, function(data) coef(lm(mpg ~ drat + wt, data = data)), R = 250,
                 kernel = "epanechnikov")
b2
summary(b2)

# smoothed bootstrap of parameters of linear regression
# smoothing phase is not applied to the "am" and "cyl" variables
b3 <- kernelboot(mtcars, function(data) coef(lm(mpg ~ drat + wt + am + cyl, data = data)), R = 250,
                 ignore = c("am", "cyl"))
b3
summary(b3)

# standard bootstrap (without kernel smoothing)
b4 <- kernelboot(mtcars, function(data) coef(lm(mpg ~ drat + wt + am + cyl, data = data)), R = 250,
                 ignore = colnames(mtcars))
b4
summary(b4)

# smoothed bootstrap for median of univariate data
b5 <- kernelboot(mtcars$mpg, function(data) median(data), R = 250)
b5
summary(b5)
