ks-package: ks


ks

Description

Kernel smoothing for data from 1- to 6-dimensions.

Details

There are three main types of functions in this package:

  • computing kernel estimators - these function names begin with ‘k’

  • computing bandwidth selectors - these begin with ‘h’ (1-d) or ‘H’ (>1-d)

  • displaying kernel estimators - these begin with ‘plot’.

The kernel used throughout is the normal (Gaussian) kernel K. For 1-d data, the bandwidth h is the standard deviation of the normal kernel, whereas for multivariate data, the bandwidth matrix \bold{{\rm H}} is the variance matrix.

–For kernel density estimation, kde computes

\hat{f}(\bold{x}) = n^{-1} \sum_{i=1}^n K_{\bold{{\rm H}}} (\bold{x} - \bold{X}_i).

The bandwidth matrix \bold{{\rm H}} is a matrix of smoothing parameters and its choice is crucial for the performance of kernel estimators. For display, its plot method calls plot.kde.
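A minimal sketch of this workflow (using the faithful dataset from base R; not part of this page):

```r
library(ks)

## 2-d data: eruption duration vs waiting time of the Old Faithful geyser
H <- Hpi(x = faithful)             # plug-in bandwidth matrix
fhat <- kde(x = faithful, H = H)   # kernel density estimate
plot(fhat)                         # dispatches to plot.kde
```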

–For kernel density estimation, there are several varieties of bandwidth selectors

  • plug-in hpi (1-d); Hpi, Hpi.diag (2- to 6-d)

  • least squares (or unbiased) cross validation (LSCV or UCV) hlscv (1-d); Hlscv, Hlscv.diag (2- to 6-d)

  • biased cross validation (BCV) Hbcv, Hbcv.diag (2- to 6-d)

  • smoothed cross validation (SCV) hscv (1-d); Hscv, Hscv.diag (2- to 6-d)

  • normal scale hns (1-d); Hns (2- to 6-d).
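The 1-d selectors above can be compared directly on the same sample (an illustrative sketch, again using faithful; the resulting bandwidths will generally differ):

```r
library(ks)

x <- faithful$eruptions   # 1-d sample
h.pi   <- hpi(x)          # plug-in
h.lscv <- hlscv(x)        # least squares (unbiased) cross validation
h.scv  <- hscv(x)         # smoothed cross validation
h.ns   <- hns(x)          # normal scale
fhat <- kde(x = x, h = h.pi)
```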

–For kernel density support estimation, the main function is ksupp, which computes (the convex hull of)

\{\bold{x}: \hat{f}(\bold{x}) > \tau\}

for a suitable level \tau. This is closely related to the \tau-level set of \hat{f}.
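For example (a sketch assuming the faithful data; the contour level 95 is illustrative):

```r
library(ks)

fhat <- kde(x = faithful)        # bandwidth defaults to a plug-in selector
supp <- ksupp(fhat, cont = 95)   # convex hull of the region where fhat exceeds tau
</imports>
```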

–For truncated kernel density estimation, the main function is kde.truncate

\hat{f} (\bold{x}) \bold{1}\{\bold{x} \in \Omega\} / \int_{\Omega}\hat{f} (\bold{x}) \, d\bold{x}

for a bounded data support \Omega. The standard density estimate \hat{f} is truncated and rescaled to give unit integral over \Omega. Its plot method calls plot.kde.
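A sketch of truncation to a rectangular support (the vertex coordinates are illustrative, not from this page):

```r
library(ks)

fhat <- kde(x = faithful)
## Omega as a rectangle, specified by its polygon vertices
bdry <- rbind(c(1.5, 40), c(5.5, 40), c(5.5, 100), c(1.5, 100))
fhat.trunc <- kde.truncate(fhat, boundary = bdry)
plot(fhat.trunc)
```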

–For boundary kernel density estimation where the kernel function is modified explicitly in the boundary region, the main function is kde.boundary

n^{-1} \sum_{i=1}^n K^*_{\bold{{\rm H}}} (\bold{x} - \bold{X}_i)

for a boundary kernel K^*. Its plot method calls plot.kde.

–For variable kernel density estimation where the bandwidth is not a constant matrix, the main functions are kde.balloon

\hat{f}_{\rm ball}(\bold{x}) = n^{-1} \sum_{i=1}^n K_{\bold{{\rm H}}(\bold{x})} (\bold{x} - \bold{X}_i)

and kde.sp

\hat{f}_{\rm SP}(\bold{x}) = n^{-1} \sum_{i=1}^n K_{\bold{{\rm H}}(\bold{X}_i)} (\bold{x} - \bold{X}_i).

For the balloon estimation \hat{f}_{\rm ball} the bandwidth varies with the estimation point \bold{x}, whereas for the sample point estimation \hat{f}_{\rm SP} the bandwidth varies with the data point \bold{X}_i, i=1,\dots,n. Their plot methods call plot.kde. The bandwidth selectors for kde.balloon are based on the normal scale bandwidth Hns(,deriv.order=2) via the MSE minimal formula, and for kde.sp on Hns(,deriv.order=4) via the Abramson formula.
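The two variable estimators can be computed side by side (a sketch with the faithful data, using the normal scale selectors named above):

```r
library(ks)

## balloon estimator: bandwidth varies with the estimation point x
fhat.ball <- kde.balloon(x = faithful, H = Hns(faithful, deriv.order = 2))
## sample point estimator: bandwidth varies with each data point X_i
fhat.sp <- kde.sp(x = faithful, H = Hns(faithful, deriv.order = 4))
plot(fhat.ball)
```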

–For kernel density derivative estimation, the main function is kdde

{\sf D}^{\otimes r}\hat{f}(\bold{x}) = n^{-1} \sum_{i=1}^n {\sf D}^{\otimes r}K_{\bold{{\rm H}}} (\bold{x} - \bold{X}_i).

The bandwidth selectors are a modified subset of those for kde, i.e. Hlscv, Hns, Hpi, Hscv with deriv.order>0. Its plot method is plot.kdde for plotting each partial derivative singly.
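For instance, the density gradient (r=1) of a 2-d sample has two partial derivatives, plotted singly (a sketch; the which.deriv.ind values select each component):

```r
library(ks)

fhat1 <- kdde(x = faithful, deriv.order = 1)   # density gradient estimate
plot(fhat1, which.deriv.ind = 1)               # d f / d x1
plot(fhat1, which.deriv.ind = 2)               # d f / d x2
```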

–For kernel summary curvature estimation, the main function is kcurv

\hat{s}(\bold{x})= - \bold{1}\{{\sf D}^2 \hat{f}(\bold{x}) < 0\} \mathrm{abs}(|{\sf D}^2 \hat{f}(\bold{x})|)

where {\sf D}^2 \hat{f}(\bold{x}) is the kernel Hessian matrix estimate. It has the same structure as a kernel density estimate so its plot method calls plot.kde.

–For kernel discriminant analysis, the main function is kda, which computes density estimates for each of the groups in the training data, and the discriminant surface. Its plot method is plot.kda. The wrapper functions hkda, Hkda compute bandwidths for each group in the training data for kde, e.g. hpi, Hpi.
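A sketch of discriminant analysis on the iris data from base R (the choice of the first two variables is illustrative):

```r
library(ks)

x  <- iris[, 1:2]    # sepal length and width
gr <- iris$Species   # group labels
Hs <- Hkda(x = x, x.group = gr, bw = "plugin")   # per-group plug-in bandwidths
kda.fhat <- kda(x = x, x.group = gr, Hs = Hs)
plot(kda.fhat)
```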

–For kernel functional estimation, the main function is kfe which computes the r-th order integrated density functional

\hat{{\bold \psi}}_r = n^{-2} \sum_{i=1}^n \sum_{j=1}^n {\sf D}^{\otimes r}K_{\bold{{\rm H}}}(\bold{X}_i-\bold{X}_j).

The plug-in selectors are hpi.kfe (1-d), Hpi.kfe (2- to 6-d). Kernel functional estimates are usually not required to be computed directly by the user, but only within other functions in the package.

–For kernel-based 2-sample testing, the main function is kde.test which computes the integrated L_2 distance between the two density estimates as the test statistic, comprising a linear combination of 0-th order kernel functional estimates:

\hat{T} = \hat{\psi}_{0,1} + \hat{\psi}_{0,2} - (\hat{\psi}_{0,12} + \hat{\psi}_{0,21}),

and the corresponding p-value. The \psi are zero order kernel functional estimates with the subscripts indicating that 1 = sample 1 only, 2 = sample 2 only, and 12, 21 = samples 1 and 2. The bandwidth selectors are hpi.kfe, Hpi.kfe with deriv.order=0.
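A sketch of the 2-sample test on simulated data (the samples and mean shift are illustrative, not from this page):

```r
library(ks)

set.seed(8192)
x1 <- matrix(rnorm(200), ncol = 2)               # sample 1
x2 <- matrix(rnorm(200, mean = 0.5), ncol = 2)   # sample 2, shifted mean
ht <- kde.test(x1 = x1, x2 = x2)
ht$zstat    # standardised test statistic
ht$pvalue   # p-value of the 2-sample test
```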

–For kernel-based local 2-sample testing, the main function is kde.local.test which computes the squared distance between the two density estimates as the test statistic

\hat{U}(\bold{x}) = [\hat{f}_1(\bold{x}) - \hat{f}_2(\bold{x})]^2

and the corresponding local p-values. The bandwidth selectors are those used with kde, e.g. hpi, Hpi.

–For kernel cumulative distribution function estimation, the main function is kcde

\hat{F}(\bold{x}) = n^{-1} \sum_{i=1}^n \mathcal{K}_{\bold{{\rm H}}} (\bold{x} - \bold{X}_i)

where \mathcal{K} is the integrated kernel. The bandwidth selectors are hpi.kcde, Hpi.kcde. Its plot method is plot.kcde. There exist analogous functions for the survival function \hat{\bar{F}}.
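A sketch of 1-d distribution and survival function estimation (using the faithful eruption times):

```r
library(ks)

Fhat <- kcde(x = faithful$eruptions)   # bandwidth defaults to hpi.kcde
plot(Fhat)
## analogous estimate of the survival function
Fbar <- kcde(x = faithful$eruptions, tail.flag = "upper.tail")
```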

–For kernel estimation of a ROC (receiver operating characteristic) curve to compare two samples from \hat{F}_1, \hat{F}_2, the main function is kroc

\{\hat{F}_{\hat{Y}_1}(z), \hat{F}_{\hat{Y}_2}(z)\}

based on the cumulative distribution functions of \hat{Y}_j = \hat{\bar{F}}_1(\bold{X}_j), j=1,2.

The bandwidth selectors are those used with kcde, e.g. hpi.kcde, Hpi.kcde for \hat{F}_{\hat{Y}_j}, \hat{\bar{F}}_1. Its plot method is plot.kroc.

–For kernel estimation of a copula, the main function is kcopula

\hat{C}(\bold{z}) = \hat{F}(\hat{F}_1^{-1}(z_1), \dots, \hat{F}_d^{-1}(z_d))

where \hat{F}_j^{-1}(z_j) is the z_j-th quantile of the j-th marginal distribution \hat{F}_j. The bandwidth selectors are those used with kcde for \hat{F}, \hat{F}_j. Its plot method is plot.kcde.

–For kernel mean shift clustering, the main function is kms. The mean shift recurrence relation of the candidate point {\bold x}

{\bold x}_{j+1} = {\bold x}_j + \bold{{\rm H}} {\sf D} \hat{f}({\bold x}_j)/\hat{f}({\bold x}_j),

where j\geq 0 and {\bold x}_0 = {\bold x}, is iterated until {\bold x} converges to its local mode in the density estimate \hat{f} by following the density gradient ascent paths. This mode determines the cluster label for \bold{x}. The bandwidth selectors are those used with kdde(,deriv.order=1).
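A sketch of mean shift clustering on the faithful data (the default bandwidth is a plug-in selector for the density gradient, per the text above):

```r
library(ks)

kms.fhat <- kms(x = faithful)   # iterate the mean shift recurrence to the modes
kms.fhat$nclust                 # number of clusters found
head(kms.fhat$label)            # cluster label of each data point
plot(kms.fhat)
```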

–For kernel density ridge estimation, the main function is kdr. The kernel density ridge recurrence relation of the candidate point {\bold x}

{\bold x}_{j+1} = {\bold x}_j + \bold{{\rm U}}_{(d-1)}({\bold x}_j)\bold{{\rm U}}_{(d-1)}({\bold x}_j)^T \bold{{\rm H}} {\sf D} \hat{f}({\bold x}_j)/\hat{f}({\bold x}_j),

where j\geq 0, {\bold x}_0 = {\bold x} and \bold{{\rm U}}_{(d-1)} is the 1-dimensional projected density gradient, is iterated until {\bold x} converges to the ridge in the density estimate. The bandwidth selectors are those used with kdde(,deriv.order=2).

–For kernel feature significance, the main function is kfs. The hypothesis test at a point \bold{x} is H_0(\bold{x}): \mathsf{H} f(\bold{x}) < 0, i.e. the density Hessian matrix \mathsf{H} f(\bold{x}) is negative definite. The test statistic is

W(\bold{x}) = \Vert \mathbf{S}(\bold{x})^{-1/2} \mathrm{vech} \ \mathsf{H} \hat{f} (\bold{x})\Vert ^2

where {\sf H}\hat{f} is the Hessian estimate, vech is the vector-half operator, and \mathbf{S} is an estimate of the null variance. W(\bold{x}) is approximately \chi^2 distributed with d(d+1)/2 degrees of freedom. If H_0(\bold{x}) is rejected, then \bold{x} belongs to a significant modal region. The bandwidth selectors are those used with kdde(,deriv.order=2). Its plot method is plot.kfs.

–For deconvolution density estimation, the main function is kdcde. A weighted kernel density estimation with the contaminated data {\bold W}_1, \dots, {\bold W}_n,

\hat{f}_w({\bold x}) = n^{-1} \sum_{i=1}^n \alpha_i K_{\bold{{\rm H}}}({\bold x} - {\bold W}_i),

is utilised, where the weights \alpha_1, \dots, \alpha_n are chosen via a quadratic optimisation involving the error variance and the regularisation parameter. The bandwidth selectors are those used with kde.

–Binned kernel estimation is an approximation to the exact kernel estimation and is available for d=1, 2, 3, 4. This makes kernel estimators feasible for large samples.

–For an overview of this package with 2-d density estimation, see vignette("kde").

–For ks \geq 1.11.1, the misc3d and rgl (3-d plot), OceanView (quiver plot), oz (Australian map) packages have been moved from Depends to Suggests. This was done to allow ks to be installed on systems where these latter graphical-based packages can't be installed. Furthermore, since the future of OpenGL in R is not certain, plot3D becomes the default for 3D plotting for ks \geq 1.12.0. RGL plots are still supported though these may be deprecated in the future.

Author(s)

Tarn Duong for most of the package. M. P. Wand for the binned estimation, univariate plug-in selector and univariate density derivative estimator code. J. E. Chacon for the unconstrained pilot functional estimation and fast implementation of derivative-based estimation code. A. and J. Gramacki for the binned estimation for unconstrained bandwidth matrices.

References

Bowman, A. & Azzalini, A. (1997) Applied Smoothing Techniques for Data Analysis. Oxford University Press, Oxford.

Chacon, J.E. & Duong, T. (2018) Multivariate Kernel Smoothing and Its Applications. Chapman & Hall/CRC, Boca Raton.

Duong, T. (2004) Bandwidth Matrices for Multivariate Kernel Density Estimation. Ph.D. Thesis, University of Western Australia.

Scott, D.W. (2015) Multivariate Density Estimation: Theory, Practice, and Visualization (2nd edn). John Wiley & Sons, New York.

Silverman, B. (1986) Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC, London.

Simonoff, J. S. (1996) Smoothing Methods in Statistics. Springer-Verlag, New York.

Wand, M.P. & Jones, M.C. (1995) Kernel Smoothing. Chapman & Hall/CRC, London.

See Also

feature, sm, KernSmooth


ks documentation built on Aug. 11, 2023, 1:10 a.m.