npcdistbw — R Documentation
npcdistbw computes a condbandwidth object for a p+q-variate kernel conditional cumulative distribution estimator defined over mixed continuous and discrete (unordered xdat, ordered xdat and ydat) data, using either the normal-reference rule-of-thumb or the least-squares cross-validation method of Li and Racine (2008) and Li, Lin and Racine (2013).
npcdistbw(...)
## S3 method for class 'formula'
npcdistbw(formula,
data,
subset,
na.action,
call,
gdata = NULL,
...)
## S3 method for class 'condbandwidth'
npcdistbw(xdat = stop("data 'xdat' missing"),
ydat = stop("data 'ydat' missing"),
gydat = NULL,
bws,
bandwidth.compute = TRUE,
cfac.dir = 2.5*(3.0-sqrt(5)),
scale.factor.init = 0.5,
dfac.dir = 0.25*(3.0-sqrt(5)),
dfac.init = 0.375,
dfc.dir = 3,
do.full.integral = FALSE,
ftol = 1.490116e-07,
scale.factor.init.upper = 2.0,
hbd.dir = 1,
hbd.init = 0.9,
initc.dir = 1.0,
initd.dir = 1.0,
invalid.penalty = c("baseline","dbmax"),
itmax = 10000,
lbc.dir = 0.5,
scale.factor.init.lower = 0.1,
lbd.dir = 0.1,
lbd.init = 0.1,
memfac = 500.0,
ngrid = 100,
nmulti,
penalty.multiplier = 10,
remin = TRUE,
scale.init.categorical.sample = FALSE,
scale.factor.search.lower = NULL,
small = 1.490116e-05,
tol = 1.490116e-04,
transform.bounds = FALSE,
...)
## Default S3 method:
npcdistbw(xdat = stop("data 'xdat' missing"),
ydat = stop("data 'ydat' missing"),
gydat,
bws,
bandwidth.compute = TRUE,
bwmethod,
bwscaling,
bwtype,
cfac.dir,
scale.factor.init,
cxkerbound,
cxkerlb,
cxkerorder,
cxkertype,
cxkerub,
cykerbound,
cykerlb,
cykerorder,
cykertype,
cykerub,
dfac.dir,
dfac.init,
dfc.dir,
do.full.integral,
ftol,
scale.factor.init.upper,
hbd.dir,
hbd.init,
initc.dir,
initd.dir,
invalid.penalty,
itmax,
lbc.dir,
scale.factor.init.lower,
lbd.dir,
lbd.init,
memfac,
ngrid,
nmulti,
oxkertype,
oykertype,
penalty.multiplier,
remin,
scale.init.categorical.sample,
scale.factor.search.lower = NULL,
small,
tol,
transform.bounds,
uxkertype,
regtype = c("lc", "ll", "lp"),
basis = c("glp", "additive", "tensor"),
degree = NULL,
degree.select = c("manual", "coordinate", "exhaustive"),
search.engine = c("nomad+powell", "cell", "nomad"),
nomad = FALSE,
nomad.nmulti = 0L,
degree.min = NULL,
degree.max = NULL,
degree.start = NULL,
degree.restarts = 0L,
degree.max.cycles = 20L,
degree.verify = FALSE,
bernstein.basis = FALSE,
...)
These arguments identify the data, formula interface, optional distribution grid, and whether bandwidths are supplied or computed.
bandwidth.compute
a logical value which specifies whether to do a numerical search for bandwidths or not. If set to FALSE, a condbandwidth object is returned with the bandwidths set to those specified in bws. Defaults to TRUE.
bws
a bandwidth specification. This can be set as a condbandwidth object returned from a previous invocation, or as a vector of bandwidths with one element per variable in xdat and ydat; supplied values then serve as starting points for the numerical search when bandwidth.compute = TRUE.
call
the original function call. This is passed internally by np when a bandwidth search is implied by an invocation of an estimation routine. It is not recommended that the user set this.
data
an optional data frame, list or environment (or object coercible to a data frame by as.data.frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which the function is called.
formula
a symbolic description of variables on which bandwidth selection is to be performed. The details of constructing a formula are described below.
gdata
a grid of data on which the indicator function for least-squares cross-validation is to be computed (can be the sample or a grid of quantiles).
gydat
a grid of data on which the indicator function for least-squares cross-validation is to be computed (can be the sample or a grid of quantiles for ydat).
na.action
a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The (recommended) default is na.omit.
subset
an optional vector specifying a subset of observations to be used in the fitting process.
xdat
a p-variate data frame of explanatory (conditioning) data used to calculate the bandwidth(s).
ydat
a q-variate data frame of dependent data used to calculate the bandwidth(s).
These arguments control automatic local-polynomial degree search when regtype="lp".
degree.max
optional scalar or integer vector giving upper bounds for automatic degree search over continuous xdat predictors. Defaults to NULL.
degree.max.cycles
positive integer giving the maximum number of coordinate-search sweeps over the degree vector. Ignored unless degree.select = "coordinate". Defaults to 20L.
degree.min
optional scalar or integer vector giving lower bounds for automatic degree search over continuous xdat predictors. Defaults to NULL.
degree.restarts
non-negative integer giving the number of additional deterministic coordinate-search restarts. Ignored unless degree.select = "coordinate". Defaults to 0L.
degree.select
character string controlling local-polynomial degree handling when regtype = "lp": one of "manual", "coordinate", or "exhaustive". Defaults to "manual".
degree.start
optional starting degree vector for automatic coordinate search. If omitted, the search starts from the degree-zero local-constant baseline on the continuous xdat predictors. Defaults to NULL.
degree.verify
logical value indicating whether a coordinate-search solution should be exhaustively verified over the admissible degree grid after the heuristic phase completes. Available only for degree.select = "coordinate". Defaults to FALSE.
These arguments choose the selection criterion and the way continuous bandwidths are represented.
bwmethod
which method to use to select bandwidths. cv.ls specifies least-squares cross-validation, and normal-reference the normal-reference rule-of-thumb. Defaults to cv.ls.
bwscaling
a logical value that when set to TRUE causes the supplied bandwidths to be interpreted as 'scale factors', otherwise they are interpreted as 'raw bandwidths'. Defaults to FALSE.
bwtype
character string used for the continuous variable bandwidth type, specifying the type of bandwidth to compute and return in the condbandwidth object. Can be set as fixed, generalized_nn, or adaptive_nn. Defaults to fixed.
These controls set categorical search starts and categorical direction-set initialization.
dfac.dir
stretch factor for direction set search for Powell's algorithm for categorical variables. See Details.
dfac.init
non-random initial values for scale factors for categorical variables for Powell's algorithm. See Details.
hbd.dir
upper bound for direction set search for Powell's algorithm for categorical variables. See Details.
hbd.init
upper bound for scale factors for categorical variables for Powell's algorithm. See Details.
initd.dir
initial non-random values for direction set search for Powell's algorithm for categorical variables. See Details.
lbd.dir
lower bound for direction set search for Powell's algorithm for categorical variables. See Details.
lbd.init
lower bound for scale factors for categorical variables for Powell's algorithm. See Details.
scale.init.categorical.sample
a logical value that when set to TRUE rescales the categorical search-start controls by a sample-size-dependent factor. See Details.
These controls set Powell direction-set initialization for continuous variables.
cfac.dir
stretch factor for direction set search for Powell's algorithm for numeric variables. See Details.
dfc.dir
chi-square degrees of freedom for direction set search for Powell's algorithm for numeric variables. See Details.
initc.dir
initial non-random values for direction set search for Powell's algorithm for numeric variables. See Details.
lbc.dir
lower bound for direction set search for Powell's algorithm for numeric variables. See Details.
These controls choose and parameterize bounded support for continuous kernels.
cxkerbound
character string controlling continuous-kernel support handling for xdat.
cxkerlb
numeric scalar/vector of lower bounds for continuous xdat kernel support.
cxkerub
numeric scalar/vector of upper bounds for continuous xdat kernel support.
cykerbound
character string controlling continuous-kernel support handling for ydat.
cykerlb
numeric scalar/vector of lower bounds for continuous ydat kernel support.
cykerub
numeric scalar/vector of upper bounds for continuous ydat kernel support.
These controls define deterministic and random continuous scale-factor starts and the lower admissibility floor for fixed-bandwidth search.
scale.factor.init
deterministic initial scale factor for continuous fixed-bandwidth search. Defaults to 0.5. See Details.
scale.factor.init.lower
lower endpoint for random continuous scale-factor starts. Defaults to 0.1. See Details.
scale.factor.init.upper
upper endpoint for random continuous scale-factor starts. Defaults to 2.0. See Details.
scale.factor.search.lower
optional nonnegative scalar giving the hard lower admissibility bound for continuous fixed-bandwidth search candidates. Defaults to NULL. See Details.
These controls tune the conditional distribution-function integral and grid calculations.
do.full.integral
a logical value which when set to TRUE evaluates the moment-based integral on the entire data set rather than on a grid of ngrid points. Defaults to FALSE.
memfac
The algorithm to compute the least-squares objective function uses a block-based algorithm to eliminate or minimize redundant kernel evaluations. Due to memory, hardware and software constraints, a maximum block size must be imposed by the algorithm. This block size is roughly equal to memfac*10^5 elements. Empirical tests on modern hardware find that a memfac of around 500 performs well. If you experience out-of-memory errors, or strange behaviour for large data sets (>100k elements), setting memfac to a lower value may fix the problem. Defaults to 500.0.
ngrid
integer number of grid points to use when computing the moment-based integral. Defaults to 100.
These controls choose continuous, unordered, and ordered kernels for xdat and ydat.
cxkerorder
numeric value specifying kernel order for xdat (one of 2, 4, 6, 8). Kernel order is specified along with a Gaussian or Epanechnikov kernel type. Defaults to 2.
cxkertype
character string used to specify the continuous kernel type for xdat. Can be set as gaussian, epanechnikov, or uniform. Defaults to gaussian.
cykerorder
numeric value specifying kernel order for ydat (one of 2, 4, 6, 8). Kernel order is specified along with a Gaussian or Epanechnikov kernel type. Defaults to 2.
cykertype
character string used to specify the continuous kernel type for ydat. Can be set as gaussian, epanechnikov, or uniform. Defaults to gaussian.
oxkertype
character string used to specify the ordered categorical kernel type for xdat. Can be set as liracine or wangvanryzin.
oykertype
character string used to specify the ordered categorical kernel type for ydat. Can be set as liracine or wangvanryzin.
uxkertype
character string used to specify the unordered categorical kernel type for xdat. Can be set as aitchisonaitken or liracine.
These arguments control the local-polynomial estimator, basis, and fixed degree specification.
basis
character string specifying the polynomial basis used when regtype = "lp". One of "glp", "additive", or "tensor". Defaults to "glp".
bernstein.basis
logical value controlling Bernstein basis evaluation for regtype = "lp". Defaults to FALSE (the automatic search route defaults to TRUE; see Details).
degree
integer scalar or integer vector of polynomial degrees for continuous xdat predictors when regtype = "lp". Defaults to NULL.
regtype
character string specifying the conditional local method used for the xdat direction: "lc" (local-constant), "ll" (local-linear), or "lp" (local-polynomial). Defaults to "lc".
These arguments control the optional NOMAD direct-search route for local-polynomial degree and bandwidth search.
nomad
logical shortcut for the recommended automatic local-polynomial NOMAD route. When TRUE, missing tuning arguments are expanded to the long-form call shown in Details. Defaults to FALSE.
nomad.nmulti
non-negative integer controlling the inner crs::snomadr() multistart count. Defaults to 0L.
search.engine
character string controlling the automatic local-polynomial search backend when degree.select != "manual". One of "nomad+powell", "cell", or "nomad". Defaults to "nomad+powell".
These controls set optimizer tolerances, restart behavior, invalid-candidate penalties, memory blocking, and bounded search transformations.
ftol
fractional tolerance on the value of the cross-validation function evaluated at located minima (of order the machine precision or perhaps slightly larger so as not to be diddled by roundoff). Defaults to 1.490116e-07.
invalid.penalty
a character string specifying the penalty used when the optimizer encounters invalid bandwidths. One of "baseline" or "dbmax". Defaults to "baseline".
itmax
integer number of iterations before failure in the numerical optimization routine. Defaults to 10000.
nmulti
integer number of times to restart the process of finding extrema of the cross-validation function from different (random) initial points.
penalty.multiplier
a numeric multiplier applied to the baseline penalty when the optimizer encounters invalid bandwidths. Defaults to 10.
remin
a logical value which when set to TRUE restarts the search routine from located minima to refine the solution. Defaults to TRUE.
small
a small number used to bracket a minimum (it is hopeless to ask for a bracketing interval of width less than sqrt(epsilon) times its central value, a fractional width of only about 1e-4 (single precision) or 3e-8 (double precision)). Defaults to 1.490116e-05.
tol
tolerance on the position of located minima of the cross-validation function (tol should generally be no smaller than the square root of your machine's floating point precision). Defaults to 1.490116e-04.
transform.bounds
a logical value that when set to TRUE conducts the numerical search on a transformed scale that respects bandwidth bounds. Defaults to FALSE.
These arguments collect remaining controls passed through S3 methods.
...
additional arguments supplied to specify the bandwidth type, kernel types, selection methods, and so on, detailed below.
The scale.factor.* controls are dimensionless search
controls. The package converts scale factors to bandwidths using the
estimator-specific scaling encoded in the bandwidth object, including
kernel order and the number of continuous variables relevant for the
estimator. Users should not pre-multiply these controls by sample-size
or standard-deviation factors.
scale.factor.init controls the deterministic first search
start. scale.factor.init.lower and
scale.factor.init.upper define the random multistart interval.
scale.factor.search.lower is the lower admissibility bound for
continuous fixed-bandwidth search candidates. The effective first
start is max(scale.factor.init, scale.factor.search.lower),
and the effective random-start lower endpoint is
max(scale.factor.init.lower, scale.factor.search.lower).
scale.factor.init.upper must be at least that effective lower
endpoint; the package errors rather than silently expanding the user's
interval.
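As a concrete sketch of the combination rules just described (effective_starts() is a hypothetical helper written for illustration, not package code):

```r
## Illustrative sketch of the effective-start rules described above;
## effective_starts() is a hypothetical helper, not part of the package.
effective_starts <- function(init, lower, upper, floor) {
  eff_init  <- max(init, floor)    # effective deterministic first start
  eff_lower <- max(lower, floor)   # effective random-start lower endpoint
  if (upper < eff_lower)
    stop("scale.factor.init.upper is below the effective lower endpoint")
  list(init = eff_init, lower = eff_lower, upper = upper)
}

## With the defaults init = 0.5, lower = 0.1, upper = 2.0 and a floor of
## 0.25, the effective first start is 0.5 and the random-start interval
## is [0.25, 2.0].
effective_starts(0.5, 0.1, 2.0, 0.25)
```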
When scale.factor.search.lower is NULL, an existing
bandwidth object's stored floor is inherited when available;
otherwise the package default 0.1 is used. Explicit bandwidths
supplied for storage with bandwidth.compute = FALSE are not
rewritten by the search floor.
Categorical search-start controls such as dfac.init,
lbd.init, and hbd.init have separate semantics and are
not affected by scale.factor.search.lower.
Documentation guide: see np.kernels for kernels, np.options for global options, and plot for plotting options.
The bandwidth-selection argument surface is easiest to read by
decision group. Start with the data and bandwidth inputs
(xdat, ydat, gydat, bws, and
bandwidth.compute), then choose the bandwidth criterion and
representation (bwmethod, bwscaling, and
bwtype). Next choose continuous kernel and support controls
(cxker* and cyker*), categorical kernel controls
(uxkertype, oxkertype, and oykertype), and
numerical search controls including nmulti, tolerances,
penalties, and the scale.factor.* search-start and
admissibility controls. Local-polynomial and NOMAD controls
(regtype, basis, degree*,
search.engine, nomad, nomad.nmulti, and
bernstein.basis) are relevant when using the explicit
local-polynomial route.
For S3 plotting help, use methods("plot") and query
class-specific help topics such as ?plot.npregression and
?plot.rbandwidth. You can inspect implementations with
getS3method("plot","npregression").
npcdistbw implements a variety of methods for choosing
bandwidths for multivariate distributions (p+q-variate) defined
over a set of possibly continuous and/or discrete (unordered
xdat, ordered xdat and ydat) data. The approach
is based on Li and Racine (2004) who employ ‘generalized
product kernels’ that admit a mix of continuous and discrete data
types.
The cross-validation methods employ multivariate numerical search
algorithms. For fixed local-constant/local-linear fits, and for
local-polynomial fits with degree.select="manual", bandwidth
search uses multidimensional Powell direction-set optimization.
Bandwidths can (and will) differ for each variable, which is, of course, desirable.
Three classes of kernel estimators for the continuous data types are
available: fixed, adaptive nearest-neighbor, and generalized
nearest-neighbor. Adaptive nearest-neighbor bandwidths change with
each sample realization in the set, x_i, when estimating
the cumulative distribution at the point x. Generalized nearest-neighbor
bandwidths change with the point at which the cumulative distribution is estimated,
x. Fixed bandwidths are constant over the support of x.
npcdistbw may be invoked either with a formula-like
symbolic
description of variables on which bandwidth selection is to be
performed or through a simpler interface whereby data is passed
directly to the function via the xdat and ydat
parameters. Use of these two interfaces is mutually exclusive.
Data contained in the data frame xdat may be a mix of
continuous (default), unordered discrete (to be specified in the data
frames using factor), and ordered discrete (to be
specified in the data frames using ordered). Data
contained in the data frame ydat may be a mix of continuous
(default) and ordered discrete (to be specified in the data frames
using ordered). Data can be entered in an arbitrary
order and data types will be detected automatically by the routine
(see np for details).
Data for which bandwidths are to be estimated may be specified
symbolically. A typical description has the form dependent data
~ explanatory data,
where dependent data and explanatory data are both
series of variables specified by name, separated by
the separation character '+'. For example, y1 + y2 ~ x1 + x2
specifies that the bandwidths for the joint distribution of variables
y1 and y2 conditioned on x1 and x2 are to
be estimated. See below for further examples.
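The symbolic description above is an ordinary R formula object; a small sketch (variable names y1, y2, x1, x2 and data frame mydata are hypothetical):

```r
## The symbolic description is an ordinary R formula object
## (variable names hypothetical):
f <- y1 + y2 ~ x1 + x2
all.vars(f)   # "y1" "y2" "x1" "x2"

## Passing it to the bandwidth selector (requires the np package):
## bw <- npcdistbw(formula = f, data = mydata)
```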
A variety of kernels may be specified by the user. Kernels implemented for continuous data types include the second, fourth, sixth, and eighth order Gaussian and Epanechnikov kernels, and the uniform kernel. Unordered discrete data types use a variation on Aitchison and Aitken's (1976) kernel, while ordered data types use a variation of the Wang and van Ryzin (1981) kernel.
When regtype="lp" and degree.select != "manual",
npcdistbw can jointly determine the xdat-side local
polynomial degree vector and the fixed bandwidth coordinates entering
the conditional distribution criterion. With
search.engine="cell", the criterion is profiled over the
admissible degree grid using cached coordinate-wise or exhaustive
search. With search.engine="nomad" or
"nomad+powell", the criterion is optimized directly over the
joint degree/bandwidth space using crs::snomadr();
"nomad+powell" then performs one Powell hot start from the
NOMAD solution and keeps the better of the direct NOMAD and polished
answers. This polynomial-adaptive joint-search route is motivated by
Hall and Racine (2015) together with Li, Li, and Racine (under
revision). When bernstein.basis is not explicitly supplied,
the automatic search route defaults to bernstein.basis=TRUE
for numerical stability.
Setting nomad=TRUE is a convenience preset for this automatic
LP route, not a generic optimizer alias. For conditional distribution
bandwidth selection it expands any missing values to the equivalent
long-form call
npcdistbw(...,
regtype = "lp",
search.engine = "nomad+powell",
degree.select = "coordinate",
bernstein.basis = TRUE,
degree.min = 0L,
degree.max = 10L,
degree.verify = FALSE,
bwtype = "fixed")
Compatible explicit tuning arguments are respected. Incompatible explicit settings fail fast so the shortcut never silently changes user-selected semantics.
The optimizer invoked for search is Powell's conjugate direction
method which requires the setting of (non-random) initial values and
search directions for bandwidths, and, when restarting, random values
for successive invocations. Bandwidths for numeric variables
are scaled by robust measures of spread, the sample size, and the
number of numeric variables where appropriate. Two sets of
parameters for bandwidths for numeric variables can be modified: those
for the initial values of the parameters themselves, and those for the
directions taken (Powell's algorithm does not involve explicit
computation of the function's gradient). The default values were set by
considering search performance for a variety of difficult test and
simulated cases. We highly recommend restarting the search a large
number of times to avoid becoming trapped in local minima (achieved by
modifying nmulti). Further refinement for difficult cases can
be achieved by modifying these sets of parameters. However, these
parameters are intended more for the authors of the package, to enable
'tuning' for various methods, than for users themselves.
npcdistbw returns a condbandwidth object, with the
following components:
xbw
bandwidth(s), scale factor(s) or nearest neighbours for the explanatory data, xdat.
ybw
bandwidth(s), scale factor(s) or nearest neighbours for the dependent data, ydat.
fval
objective function value at minimum.
If bwtype is set to fixed, an object containing
bandwidths (or scale factors if bwscaling = TRUE) is
returned. If it is set to generalized_nn or adaptive_nn,
then instead the kth nearest neighbors are returned for the
continuous variables while the discrete kernel bandwidths are returned
for the discrete variables.
The functions predict, summary and plot support
objects of type condbandwidth.
If you are using data of mixed types, then it is advisable to use the
data.frame function to construct your input data and not
cbind, since cbind will typically not work as
intended on mixed data types and will coerce the data to the same
type.
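A minimal illustration of why cbind fails here while data.frame does not:

```r
## cbind() coerces mixed types to a common type (here character),
## losing the factor/numeric distinction; data.frame() preserves it.
x <- factor(c("a", "b", "a"))
y <- c(1.2, 3.4, 5.6)

m <- cbind(x = as.character(x), y = y)  # character matrix: types lost
d <- data.frame(x = x, y = y)           # factor and numeric preserved

sapply(d, class)  # x is "factor", y is "numeric"
```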
Caution: multivariate data-driven bandwidth selection methods are, by
their nature, computationally intensive. Virtually all methods
require dropping the ith observation from the data set, computing an
object, repeating this for all observations in the sample, then
averaging each of these leave-one-out estimates for a given
value of the bandwidth vector, and only then repeating this a large
number of times in order to conduct multivariate numerical
minimization/maximization. Furthermore, due to the potential for local
minima/maxima, restarting this procedure a large number of times may
often be necessary. This can be frustrating for users possessing
large datasets. For exploratory purposes, you may wish to override the
default search tolerances by, say, setting ftol=.01 and tol=.01 and
conducting multistarting (the default is to restart
min(2, ncol(xdat) + ncol(ydat)) times), as is done for a number of
examples. Once the procedure
terminates, you can restart search with default tolerances using those
bandwidths obtained from the less rigorous search (i.e., set
bws=bw on subsequent calls to this routine where bw is
the initial bandwidth object). A version of this package using the
Rmpi wrapper is under development that allows one to deploy
this software in a clustered computing environment to facilitate
computation involving large datasets.
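The two-stage strategy just described can be sketched as follows (assumes the np package is installed; variable names hypothetical, not run):

```r
## Sketch only (np package assumed; variable names hypothetical).
## Stage 1: coarse, fast search with loose tolerances and extra restarts.
# bw.coarse <- npcdistbw(y ~ x, ftol = 0.01, tol = 0.01, nmulti = 10)
## Stage 2: refine from the coarse solution using the default tolerances.
# bw.final  <- npcdistbw(y ~ x, bws = bw.coarse)
```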
Tristen Hayfield tristen.hayfield@gmail.com, Jeffrey S. Racine racinej@mcmaster.ca
Aitchison, J. and C.G.G. Aitken (1976), “Multivariate binary discrimination by the kernel method,” Biometrika, 63, 413-420.
Hall, P. and J.S. Racine and Q. Li (2004), “Cross-validation and the estimation of conditional probability densities,” Journal of the American Statistical Association, 99, 1015-1026.
Hall, P. and J.S. Racine (2015), “Infinite Order Cross-Validated Local Polynomial Regression,” Journal of Econometrics, 185, 510-525.
Li, Q. and J.S. Racine (2007), Nonparametric Econometrics: Theory and Practice, Princeton University Press.
Li, Q. and J.S. Racine (2008), “Nonparametric estimation of conditional CDF and quantile functions with mixed categorical and continuous data,” Journal of Business and Economic Statistics, 26, 423-434.
Li, Q. and J. Lin and J.S. Racine (2013), “Optimal bandwidth selection for nonparametric conditional distribution and quantile functions”, Journal of Business and Economic Statistics, 31, 57-65.
Li, A. and Q. Li and J.S. Racine (under revision), “Boundary Adjusted, Polynomial Adaptive, Nonparametric Kernel Conditional Density Estimation,” Econometric Reviews.
Pagan, A. and A. Ullah (1999), Nonparametric Econometrics, Cambridge University Press.
Scott, D.W. (1992), Multivariate Density Estimation. Theory, Practice and Visualization, New York: Wiley.
Silverman, B.W. (1986), Density Estimation, London: Chapman and Hall.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions,” Biometrika, 68, 301-309.
np.kernels, np.options, plot
bw.nrd, bw.SJ, hist,
npudens, npudist
## Not run:
# EXAMPLE 1 (INTERFACE=FORMULA): For this example, we compute the
# cross-validated bandwidths (default) using a second-order Gaussian
# kernel (default). Note - this may take a minute or two depending on
# the speed of your computer.
data("Italy")
Italy <- Italy[seq_len(min(300, nrow(Italy))), ]
attach(Italy)
bw <- npcdistbw(formula=gdp~ordered(year), nmulti=1)
# The object bw can be used for further estimation using
# npcdist(), plotting using plot() etc. Entering the name of
# the object provides useful summary information, and names() will also
# provide useful information.
summary(bw)
# Note - see the example for npudensbw() for multiple illustrations
# of how to change the kernel function, kernel order, and so forth.
detach(Italy)
# EXAMPLE 1 (INTERFACE=DATA FRAME): For this example, we compute the
# cross-validated bandwidths (default) using a second-order Gaussian
# kernel (default). Note - this may take a minute or two depending on
# the speed of your computer.
data("Italy")
Italy <- Italy[seq_len(min(300, nrow(Italy))), ]
attach(Italy)
bw <- npcdistbw(xdat=ordered(year), ydat=gdp, nmulti=1)
# The object bw can be used for further estimation using npcdist(),
# plotting using plot() etc. Entering the name of the object provides
# useful summary information, and names() will also provide useful
# information.
summary(bw)
# Note - see the example for npudensbw() for multiple illustrations
# of how to change the kernel function, kernel order, and so forth.
detach(Italy)
## End(Not run)