Ake-package: Associated kernel estimations


Associated kernel estimations

Description

Estimation of the density (dke.fun), the probability mass function (p.m.f., kpmfe.fun) and the regression function (reg.fun) is performed using continuous and discrete associated kernels. Cross-validation (hcvc.fun, hcvreg.fun) and a Bayesian procedure (hbay.fun) are also implemented for bandwidth selection.

Details

The estimated density or p.m.f.:

The associated kernel estimator \widehat{f}_n of f is defined as

\widehat{f}_n(x) = \frac{1}{n}\sum_{i=1}^{n}K_{x,h}(X_i),

where K_{x,h} is one of the kernels kef defined below. In practice, we first calculate the global normalizing constant

C_n = \int_{x\in T}\widehat{f}_n(x)\,\nu(dx),

where T is the support of the density or p.m.f. and \nu is the Lebesgue or counting measure on T. For both continuous and discrete associated kernels, this normalizing constant is not, in general, equal to 1, and it is therefore computed. The represented density or p.m.f. estimate is then \tilde{f}_n=\widehat{f}_n/C_n.
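
For illustration, here is a minimal base-R sketch of the normalized estimate \tilde{f}_n (not the code of dke.fun) using the gamma kernel defined below; the sample X, the bandwidth h and the evaluation grid are toy choices, and C_n is approximated by the trapezoidal rule on a finite grid.

# Gamma associated kernel K_{x,h}(y) = GA_{x,h}(y) (defined below)
gamma_kernel <- function(x, y, h) dgamma(y, shape = 1 + x/h, scale = h)

# Unnormalized estimate f_hat(x) = (1/n) sum_i K_{x,h}(X_i)
f_hat <- function(x, X, h) mean(gamma_kernel(x, X, h))

X <- rgamma(100, shape = 2, scale = 1)     # toy sample on T = [0, Inf)
h <- 0.2                                   # bandwidth, assumed given
xgrid <- seq(0, 10, length.out = 201)
fx <- sapply(xgrid, function(x) f_hat(x, X, h))

# Global normalizing constant C_n, approximated by the trapezoidal rule
Cn <- sum(diff(xgrid) * (head(fx, -1) + tail(fx, -1)) / 2)
f_tilde <- fx / Cn                         # represented estimate f_hat / C_n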

For discrete data, the integrated squared error (ISE) defined by

ISE_0 = \sum_{x\in N}\{\tilde{f}_n(x) - f_0(x)\}^2

is the criterion used to compare the associated kernel estimator \tilde{f}_n with the empirical p.m.f. f_0; see Kokonendji and Senga Kiessé (2011).
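
As a short illustration of this criterion, the sketch below computes ISE_0 for a toy count sample, using the binomial kernel defined below with an arbitrary bandwidth h = 0.1 and a normalizing constant approximated on the truncated support {0, ..., max(X)}; this is not the code of kpmfe.fun.

X  <- rpois(100, 3)                                    # toy count sample
xs <- 0:max(X)                                         # truncated count support
f0 <- tabulate(X + 1, nbins = max(X) + 1) / length(X)  # empirical p.m.f. f_0
fn <- sapply(xs, function(x) mean(dbinom(X, x + 1, (x + 0.1)/(x + 1))))  # f_hat with h = 0.1
fn <- fn / sum(fn)                                     # normalized estimate f_tilde
ISE0 <- sum((fn - f0)^2)                               # integrated squared error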

The estimated regressor:

In both the continuous and discrete cases, consider the relation between a response variable y and an explanatory variable x given by

y=m(x)+\varepsilon,

where m is an unknown regression function on T and \varepsilon is a disturbance term with zero mean and finite variance. Let (x_1,y_1),…,(x_n,y_n) be a sequence of independent and identically distributed (iid) random vectors on T\times R with m(x)=E(y|x). The well-known Nadaraya-Watson estimator using associated kernels is \widehat{m}_n, defined as

\widehat{m}_n(x) = \sum_{i=1}^{n}\omega_{x}(X_i)Y_i,

where \omega_{x}(X_i)=K_{x,h}(X_i)/\sum_{j=1}^{n}K_{x,h}(X_j) and K_{x,h} is one of the associated kernels defined below.
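
A minimal sketch of this estimator with the binomial kernel (defined below) for a count explanatory variable is given next; the data and the bandwidth are toy choices and the code only illustrates the formula, not reg.fun.

# Binomial associated kernel B_{x,h}(y) (defined below)
binom_kernel <- function(x, y, h) dbinom(y, size = x + 1, prob = (x + h)/(x + 1))

# Nadaraya-Watson estimate m_hat(x) with weights omega_x(X_i)
m_hat <- function(x, X, Y, h) {
  w <- binom_kernel(x, X, h)
  sum(w * Y) / sum(w)
}

X <- rpois(60, 4)                          # toy count regressor
Y <- 2 + 0.5 * X + rnorm(60, sd = 0.3)     # toy response
h <- 0.1
m_fit <- sapply(sort(unique(X)), function(x) m_hat(x, X, Y, h))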

Besides the criterion of kernel support, we retain the root mean squared error (RMSE) and the practical coefficient of determination R^2, defined respectively by

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\{y_i-\widehat{m}_n(x_i)\}^2}

and

R^2=\frac{\sum_{i=1}^{n}\{\widehat{m}_n(x_i)-\bar{y}\}^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2},

where \bar{y}=n^{-1}(y_1+…+y_n); see Kokonendji et al. (2009).
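
Continuing the regression sketch above, both criteria can be computed from the fitted values \widehat{m}_n(x_i):

m_i  <- sapply(X, function(x) m_hat(x, X, Y, h))   # fitted values m_hat(x_i)
RMSE <- sqrt(mean((Y - m_i)^2))
R2   <- sum((m_i - mean(Y))^2) / sum((Y - mean(Y))^2)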

Given a data sample, the package computes the density or p.m.f. and regression functions using one of seven associated kernels: extended beta, lognormal, gamma and reciprocal inverse Gaussian for continuous data, DiracDU for categorical data, and binomial and discrete triangular for count data. The bandwidth parameter is computed using the cross-validation technique. When the associated kernel is binomial, the bandwidth can also be computed using a local Bayesian procedure. The associated kernel functions are defined below: the first four are for continuous data and the last three for the discrete case.

Extended beta kernel:

The extended beta kernel is defined on {S}_{x,h,a,b}=[a,b]=T with a<b<∞, x \in T and h>0:

BE_{x,h,a,b}(y) = \frac{(y-a)^{(x-a)/\{(b-a)h\}}\,(b-y)^{(b-x)/\{(b-a)h\}}}{(b-a)^{1+h^{-1}}\,B\left(1+(x-a)/\{(b-a)h\},\,1+(b-x)/\{(b-a)h\}\right)}\,1_{S_{x,h,a,b}}(y),

where B(r,s)=\int_0^1 t^{r-1}(1-t)^{s-1}dt is the usual beta function with r>0, s>0, and 1_A denotes the indicator function of A. For a=0 and b=1, it corresponds to the beta kernel, which is the probability density function of the beta distribution with shape parameters 1+x/h and 1+(1-x)/h; see Libengué (2013).
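
A sketch of BE_{x,h,a,b} coded from the formula above, using the fact that it is a rescaled beta density on [a,b]; the name ext_beta_kernel is illustrative and not a package function.

# Extended beta kernel BE_{x,h,a,b}(y): rescaled beta density on [a, b]
ext_beta_kernel <- function(x, y, h, a = 0, b = 1) {
  r <- 1 + (x - a) / ((b - a) * h)   # first shape parameter
  s <- 1 + (b - x) / ((b - a) * h)   # second shape parameter
  dbeta((y - a) / (b - a), r, s) / (b - a)
}
# For a = 0 and b = 1 this reduces to dbeta(y, 1 + x/h, 1 + (1 - x)/h)
ext_beta_kernel(x = 0.3, y = seq(0.1, 0.9, by = 0.2), h = 0.2)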

Gamma kernel:

The gamma kernel is defined on {S}_{x,h}=[0, ∞)=T with x \in T and h>0 by

GA_{x,h}(y) = \frac{y^{x/h}}{\Gamma(1+x/h)\,h^{1+x/h}}\exp\left(-\frac{y}{h}\right)1_{S_{x,h}}(y),

where \Gamma(z)=\int_0^{\infty} t^{z-1}e^{-t}dt is the classical gamma function. The probability density function GA_{x,h} is that of the gamma distribution with shape parameter 1+x/h and scale parameter h; see Chen (2000).
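
In base R the gamma kernel is simply a reparametrized dgamma; a one-line sketch:

# GA_{x,h}(y): gamma density with shape 1 + x/h and scale h, evaluated at y
GA <- function(x, y, h) dgamma(y, shape = 1 + x/h, scale = h)
GA(x = 2, y = c(0.5, 1.5, 3), h = 0.3)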

Lognormal kernel:

The lognormal kernel is defined on {S}_{x,h}=[0,∞)=T with x \in T and h>0 by

LN_{x,h}(y) = \frac{1}{yh\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{1}{h}\log\left(\frac{y}{x}\right)-h\right)^{2}\right\}1_{S_{x,h}}(y).

It is the probability density function of the classical lognormal distribution with mean \log(x)+h^{2} and standard deviation h on the logarithmic scale; see Libengué (2013).
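
Likewise, the lognormal kernel is a reparametrized dlnorm; a one-line sketch:

# LN_{x,h}(y): lognormal density with meanlog log(x) + h^2 and sdlog h
LN <- function(x, y, h) dlnorm(y, meanlog = log(x) + h^2, sdlog = h)
LN(x = 2, y = c(0.5, 1.5, 3), h = 0.3)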

Binomial kernel:

Let x\in N:= \{0, 1, … \} and {S}_x = \{0, 1, …, x + 1\}. The Binomial kernel is defined on the support {S}_x by

B_{x,h}(y) = \frac{(x+1)!}{y!\,(x+1-y)!}\left(\frac{x+h}{x+1}\right)^{y}\left(\frac{1-h}{x+1}\right)^{x+1-y}1_{S_{x}}(y),

where h\in(0, 1]. Note that B_{x,h} is the p.m.f. of the binomial distribution with number of trials x+1 and success probability (x+h)/(x+1); see Kokonendji and Senga Kiessé (2011).
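
A sketch of B_{x,h} via dbinom; the weights sum to 1 over S_x:

# B_{x,h}(y): binomial p.m.f. with x + 1 trials and success probability (x + h)/(x + 1)
B <- function(x, y, h) dbinom(y, size = x + 1, prob = (x + h)/(x + 1))
B(x = 3, y = 0:4, h = 0.1)        # kernel weights on S_x = {0, 1, ..., x + 1}
sum(B(x = 3, y = 0:4, h = 0.1))   # equals 1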

Discrete triangular kernel:

For fixed arm a\in N, we define {S}_{x,a} = \{x-a,…, x, …, x + a\}. The discrete triangular kernel is defined on {S}_{x,a} by

DT_{x,h;a}(y) = \frac {(a+1)^h - |y-x|^h} {P(a,h)}1_{S_{x,a}}(y),

where x\in N, h>0 and P(a,h)=(2a+1)(a+1)^h - 2(1+2^h+ \cdots +a^h) is the normalizing constant. For a=0, the discrete triangular kernel DT_{x,h;0} corresponds to the Dirac kernel on x; see Kokonendji et al. (2007), and also Kokonendji and Zocchi (2010) for an asymmetric version of the discrete triangular kernel.
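
A sketch of DT_{x,h;a} coded directly from the formula above; the name DT is illustrative:

# Discrete triangular kernel DT_{x,h;a}(y) on S_{x,a} = {x - a, ..., x + a}
DT <- function(x, y, h, a) {
  P <- (2 * a + 1) * (a + 1)^h - 2 * sum(seq_len(a)^h)   # normalizing constant P(a, h)
  ((a + 1)^h - abs(y - x)^h) / P * (abs(y - x) <= a)
}
DT(x = 5, y = 3:7, h = 0.5, a = 2)        # weights on S_{5,2}
sum(DT(x = 5, y = 3:7, h = 0.5, a = 2))   # equals 1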

DiracDU kernel:

For a fixed number of categories c\in \{2,3,…\}, we define {S}_{c} = \{0, 1, …, c-1\}. The DiracDU kernel is defined on {S}_{c} by

DU_{x,h;c}(y) = (1 - h)1_{\{x\}}(y)+\frac {h} {c-1}1_{S_{c}\setminus\{x\}}(y),

where x\in {S}_{c} and h\in(0, 1]. See Kokonendji and Senga Kiessé (2011), and also Aitchison and Aitken (1976) for the multivariate case.

Note that the global normalizing constant is 1 for DiracDU.
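
A sketch of DU_{x,h;c}; since the kernel is symmetric in x and y, summing it over S_c illustrates the unit normalizing constant noted above:

# DiracDU kernel DU_{x,h;c}(y) on S_c = {0, 1, ..., c - 1}
DU <- function(x, y, h, c) (1 - h) * (y == x) + (h / (c - 1)) * (y != x & y >= 0 & y <= c - 1)
DU(x = 1, y = 0:3, h = 0.2, c = 4)        # approx. 0.0667 0.8000 0.0667 0.0667
sum(DU(x = 1, y = 0:3, h = 0.2, c = 4))   # equals 1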

The bandwidth selection:

Two procedures are implemented to select the bandwidth: cross-validation and a local Bayesian method. The cross-validation technique is used for all the associated kernels, both in density estimation and in regression; see Kokonendji and Senga Kiessé (2011). The local Bayesian procedure is implemented to select the bandwidth in p.m.f. estimation when using the binomial kernel; see Zougab et al. (2014).
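
As an illustration of the cross-validation idea, the sketch below minimizes a generic least-squares cross-validation score for the binomial-kernel p.m.f. estimate, reusing the kernel B defined above; the exact criterion and interface of the package's bandwidth functions may differ.

# Least-squares cross-validation score for the binomial-kernel p.m.f. estimate
cv_score <- function(h, X) {
  n   <- length(X)
  xs  <- 0:(max(X) + 1)                                   # truncated count support
  fh  <- sapply(xs, function(x) mean(B(x, X, h)))         # f_hat(x)
  loo <- sapply(seq_len(n), function(i) mean(B(X[i], X[-i], h)))  # leave-one-out f_hat_{-i}(X_i)
  sum(fh^2) - 2 * mean(loo)
}
X    <- rpois(80, 3)                                      # toy count sample
h_cv <- optimize(cv_score, interval = c(0.01, 1), X = X)$minimum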

In coming versions of the package, an adaptive Bayesian procedure will be included for bandwidth selection in density estimation when using the gamma kernel. A global Bayesian procedure will also be implemented for bandwidth selection in regression when using the binomial kernel.

Author(s)

W. E. Wansouwé, S. M. Somé and C. C. Kokonendji

Maintainer: W. E. Wansouwé <ericwansouwe@gmail.com>

References

Aitchison, J. and Aitken, C.G.G. (1976). Multivariate binary discrimination by the kernel method, Biometrika 63, 413 - 420.

Chen, S. X. (1999). Beta kernels estimators for density functions, Computational Statistics and Data Analysis 31, 131 - 145.

Chen, S. X. (2000). Probability density function estimation using gamma kernels, Annals of the Institute of Statistical Mathematics 52, 471 - 480.

Igarashi, G. and Kakizawa, Y. (2015). Bias correction for some asymmetric kernel estimators, Journal of Statistical Planning and Inference 159, 37 - 63.

Kokonendji, C.C. and Senga Kiessé, T. (2011). Discrete associated kernel method and extensions, Statistical Methodology 8, 497 - 516.

Kokonendji, C.C., Senga Kiessé, T. and Demétrio, C.G.B. (2009). Appropriate kernel regression on a count explanatory variable and applications, Advances and Applications in Statistics 12, 99 - 125.

Libengué, F.G. (2013). Méthode Non-Paramétrique par Noyaux Associés Mixtes et Applications, Ph.D. Thesis Manuscript (in French) to Université de Franche-Comté, Besançon, France and Université de Ouagadougou, Burkina Faso, June 2013, LMB no. 14334, Besançon.

Zougab, N., Adjabi, S. and Kokonendji, C.C. (2014). Bayesian approach in nonparametric count regression with binomial kernel, Communications in Statistics - Simulation and Computation 43, 1052 - 1063.

