npscoefbw — R Documentation
npscoefbw computes a bandwidth object for a smooth
coefficient kernel regression estimate of a one (1) dimensional
dependent variable on
p+q-variate explanatory data, using the model
Y_i = W_{i}^{\prime} \gamma (Z_i) + u_i where W_i'=(1,X_i'),
given training points (consisting of explanatory data and dependent
data) and a bandwidth specification, which can be a scbandwidth
object, or a bandwidth vector, bandwidth type, and kernel type.
npscoefbw(...)
## S3 method for class 'formula'
npscoefbw(formula,
data,
subset,
na.action,
call,
...)
## Default S3 method:
npscoefbw(xdat = stop("invoked without data 'xdat'"),
ydat = stop("invoked without data 'ydat'"),
zdat = NULL,
bws,
backfit.iterate,
backfit.maxiter,
backfit.tol,
bandwidth.compute = TRUE,
basis,
bernstein.basis,
bwmethod,
bwscaling,
bwtype,
ckerbound,
ckerlb,
ckerorder,
ckertype,
ckerub,
cv.iterate,
cv.num.iterations,
degree,
degree.select = c("manual", "coordinate", "exhaustive"),
search.engine = c("nomad+powell", "cell", "nomad"),
nomad = FALSE,
nomad.nmulti = 0L,
degree.min = NULL,
degree.max = NULL,
degree.start = NULL,
degree.restarts = 0L,
degree.max.cycles = 20L,
degree.verify = FALSE,
nmulti,
okertype,
optim.abstol,
optim.maxattempts,
optim.maxit,
optim.method,
optim.reltol,
random.seed,
regtype,
ukertype,
scale.factor.init.lower = 0.1,
scale.factor.init.upper = 2.0,
scale.factor.init = 0.5,
lbd.init = 0.5,
hbd.init = 1.5,
dfac.init = 1.0,
scale.factor.search.lower = NULL,
...)
## S3 method for class 'scbandwidth'
npscoefbw(xdat = stop("invoked without data 'xdat'"),
ydat = stop("invoked without data 'ydat'"),
zdat = NULL,
bws,
backfit.iterate = FALSE,
backfit.maxiter = 100,
backfit.tol = .Machine$double.eps,
bandwidth.compute = TRUE,
cv.iterate = FALSE,
cv.num.iterations = 1,
nmulti,
optim.abstol = .Machine$double.eps,
optim.maxattempts = 10,
optim.maxit = 500,
optim.method = c("Nelder-Mead", "BFGS", "CG"),
optim.reltol = sqrt(.Machine$double.eps),
random.seed = 42,
scale.factor.init.lower = 0.1,
scale.factor.init.upper = 2.0,
scale.factor.init = 0.5,
lbd.init = 0.5,
hbd.init = 1.5,
dfac.init = 1.0,
scale.factor.search.lower = NULL,
...)
These arguments identify the smooth-coefficient data, formula interface, and whether bandwidths are supplied or computed.
bandwidth.compute
a logical value which specifies whether to do a numerical search for
bandwidths or not. If set to FALSE, a bandwidth object will be
returned with bandwidths set to those specified in bws. Defaults to
TRUE.
bws
a bandwidth specification. This can be set as a scbandwidth object
returned from a previous invocation, or as a vector of bandwidths,
with each element corresponding to a column of zdat.
call
the original function call. This is passed internally by
np when a bandwidth search has been implied by a call to
another function. It is not recommended that the user set this.
data
an optional data frame, list or environment (or object
coercible to a data frame by as.data.frame) containing the variables
in the model. If not found in data, the variables are taken from
environment(formula), typically the environment from which the
function is called.
formula
a symbolic description of variables on which bandwidth selection is
to be performed. The details of constructing a formula are described
below.
na.action
a function which indicates what should happen when the data contain
NAs. The default is set by the na.action setting of options, and is
na.fail if that is unset. The (recommended) default is na.omit.
subset
an optional vector specifying a subset of observations to be used in
the fitting process.
xdat
a p-variate data frame of explanatory data (training data)
corresponding to the parametric regressors X in the model.
ydat
a one (1) dimensional numeric or integer vector of dependent data, each
element i corresponding to each observation (row) i of xdat.
zdat
an optionally specified q-variate data frame of explanatory data
(training data) with which the coefficients vary. Defaults to xdat if
omitted.
These arguments control automatic local-polynomial degree search.
degree.max
optional scalar or integer vector giving upper bounds for automatic
degree search when degree.select is not "manual".
degree.max.cycles
positive integer giving the maximum number of coordinate-search
sweeps over the degree vector. Ignored for exhaustive degree search.
degree.min
optional scalar or integer vector giving lower bounds for automatic
degree search when degree.select is not "manual".
degree.restarts
non-negative integer giving the number of additional deterministic
coordinate-search restarts. Ignored for exhaustive degree search.
degree.select
character string controlling local-polynomial degree handling when
regtype = "lp". One of "manual" (use the supplied degree),
"coordinate" (coordinate-wise degree search), or "exhaustive" (search
over the full admissible degree grid). Defaults to "manual".
degree.start
optional starting degree vector for automatic coordinate search. If
omitted, the search starts from the degree-zero local-constant
baseline on the continuous zdat coordinates.
degree.verify
logical value indicating whether a coordinate-search solution should
be exhaustively verified over the admissible degree grid after the
heuristic phase completes. Available only for
degree.select = "coordinate".
These controls tune the optional smooth-coefficient backfitting iterations.
backfit.iterate
a logical value specifying whether or not to iterate evaluations of
the smooth coefficient estimator, for extra accuracy, during the
cross-validated backfitting procedure. Defaults to FALSE.
backfit.maxiter
integer specifying the maximum number of times to iterate the
evaluation of the smooth coefficient estimator in the attempt to
obtain the desired accuracy. Defaults to 100.
backfit.tol
tolerance to determine convergence of iterated evaluations of the
smooth coefficient estimator. Defaults to .Machine$double.eps.
These arguments choose the selection criterion and the way continuous bandwidths are represented.
bwmethod
which method to use to select bandwidths. cv.ls specifies
least-squares cross-validation, which is all that is currently
supported. Defaults to cv.ls.
bwscaling
a logical value that when set to TRUE causes the supplied bandwidths
to be interpreted as 'scale factors' (c_j), and when set to FALSE as
'raw bandwidths' (h_j for continuous data types, lambda_j for
discrete data types). Defaults to FALSE.
bwtype
character string used for the continuous variable bandwidth type,
specifying the type of bandwidth provided. Can be set as fixed,
generalized_nn, or adaptive_nn. Defaults to fixed.
These controls set categorical search starts.
dfac.init
deterministic fixed-bandwidth start factor for ordered and
unordered categorical coordinates. Used only for fixed-bandwidth
searches. Defaults to 1.0.
hbd.init
upper bound for random fixed-bandwidth start factors for ordered
and unordered categorical coordinates. Used only for fixed-bandwidth
searches. Defaults to 1.5.
lbd.init
lower bound for random fixed-bandwidth start factors for ordered
and unordered categorical coordinates. Used only for fixed-bandwidth
searches. Defaults to 0.5.
These controls choose and parameterize bounded support for continuous kernels.
ckerbound
character string controlling continuous-kernel support handling,
i.e., whether the continuous kernels are given bounded support via
the ckerlb and ckerub bounds.
ckerlb
numeric scalar/vector of lower bounds for continuous variables, used
when ckerbound requests bounded support.
ckerub
numeric scalar/vector of upper bounds for continuous variables, used
when ckerbound requests bounded support.
These controls define deterministic and random continuous scale-factor starts and the lower admissibility floor for fixed-bandwidth search.
scale.factor.init
deterministic initial scale factor for continuous fixed-bandwidth
search. Defaults to 0.5.
scale.factor.init.lower
lower endpoint for random continuous scale-factor starts. Defaults
to 0.1.
scale.factor.init.upper
upper endpoint for random continuous scale-factor starts. Defaults
to 2.0.
scale.factor.search.lower
optional nonnegative scalar giving the hard lower admissibility
bound for continuous fixed-bandwidth search candidates. Defaults to
NULL, in which case a floor stored in an existing bandwidth object is
inherited when available, and the package default 0.1 is used
otherwise.
These controls tune iterative cross-validation behavior.
cv.iterate
a logical value specifying whether or not to perform iterative,
cross-validated backfitting on the data. See details for limitations
of the backfitting procedure. Defaults to FALSE.
cv.num.iterations
integer specifying the number of times to iterate the backfitting
process over all covariates. Defaults to 1.
These controls choose continuous, unordered, and ordered kernels.
ckerorder
numeric value specifying kernel order (one of
(2,4,6,8)). A kernel order specified along with a uniform continuous
kernel type will be ignored. Defaults to 2.
ckertype
character string used to specify the continuous kernel type.
Can be set as gaussian, epanechnikov, or uniform. Defaults to
gaussian.
okertype
character string used to specify the ordered categorical kernel type.
Can be set as wangvanryzin or liracine.
ukertype
character string used to specify the unordered categorical kernel type.
Can be set as aitchisonaitken or liracine.
These arguments control the local-polynomial estimator, basis, and fixed degree specification.
basis
for regtype = "lp", the polynomial basis used by the local-polynomial
estimator on the continuous zdat components.
bernstein.basis
for regtype = "lp", a logical value indicating whether a Bernstein
polynomial basis is used in place of the raw polynomial basis. The
automatic search route defaults to TRUE (see details).
degree
for regtype = "lp", an integer vector of local-polynomial degrees,
one per continuous column of zdat, used when
degree.select = "manual".
regtype
a character string specifying local smoothing type for the smooth
coefficient estimator; set regtype = "lp" to enable the
local-polynomial route described in the details.
These arguments control the optional NOMAD direct-search route for local-polynomial degree and bandwidth search.
nomad
logical shortcut for the recommended automatic local-polynomial
NOMAD route. When TRUE, any unset tuning arguments are expanded to
the long-form call given in the details. Defaults to FALSE.
nomad.nmulti
non-negative integer controlling the inner
crs::snomadr() multistart count for the NOMAD search engines.
Defaults to 0L.
search.engine
character string controlling the automatic local-polynomial search
backend when degree.select is not "manual". One of "nomad+powell"
(the default), "cell", or "nomad".
These controls set search restart behavior.
nmulti
integer number of times to restart the process of finding extrema of
the cross-validation function from different (random) initial
points. Defaults to min(2, ncol(zdat)).
These arguments control outer optimization behavior for the semiparametric search.
optim.abstol
the absolute convergence tolerance used by optim. Defaults to
.Machine$double.eps.
optim.maxattempts
maximum number of attempts taken trying to achieve successful
convergence in optim. Defaults to 10.
optim.maxit
maximum number of iterations used by optim. Defaults to 500.
optim.method
method used by optim for minimization of the objective function. See
?optim for references. Defaults to "Nelder-Mead". The default method
is an implementation of that of Nelder and Mead (1965), which uses
only function values and is robust but relatively slow; it will work
reasonably well for non-differentiable functions. Method "BFGS" is a
quasi-Newton method that uses function values and gradients to build
up a picture of the surface to be optimized. Method "CG" is a
conjugate gradients method.
optim.reltol
relative convergence tolerance used by optim. Defaults to
sqrt(.Machine$double.eps).
random.seed
an integer used to seed R's random number generator. This ensures
replicability of the numerical search. Defaults to 42.
These arguments collect remaining controls passed through S3 methods.
...
additional arguments supplied to specify the regression type,
bandwidth type, kernel types, selection methods, and so on, detailed
below.
The scale.factor.* controls are dimensionless search
controls. The package converts scale factors to bandwidths using the
estimator-specific scaling encoded in the bandwidth object, including
kernel order and the number of continuous variables relevant for the
estimator. Users should not pre-multiply these controls by sample-size
or standard-deviation factors.
scale.factor.init controls the deterministic first search
start when that control is exposed. scale.factor.init.lower
and scale.factor.init.upper define the random multistart
interval when exposed. scale.factor.search.lower is the lower
admissibility bound for continuous fixed-bandwidth search candidates.
The effective first start is max(scale.factor.init,
scale.factor.search.lower) when both controls are present, and the
effective random-start lower endpoint is
max(scale.factor.init.lower, scale.factor.search.lower).
scale.factor.init.upper must be at least that effective lower
endpoint; the package errors rather than silently expanding the user's
interval.
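The clamping rules above can be made concrete with a small base-R sketch. The helper name effective_starts is hypothetical and not part of the package API; it simply restates the documented max() rules and the error condition:

```r
# Hypothetical helper illustrating the documented clamping rules;
# not the np package's internal implementation.
effective_starts <- function(scale.factor.init = 0.5,
                             scale.factor.init.lower = 0.1,
                             scale.factor.init.upper = 2.0,
                             scale.factor.search.lower = 0.1) {
  # effective deterministic first start
  first <- max(scale.factor.init, scale.factor.search.lower)
  # effective random-start lower endpoint
  lower <- max(scale.factor.init.lower, scale.factor.search.lower)
  # the package errors rather than silently expanding the interval
  if (scale.factor.init.upper < lower)
    stop("scale.factor.init.upper must be at least the effective lower endpoint")
  list(first = first,
       random.interval = c(lower, scale.factor.init.upper))
}

# a search floor of 0.4 clamps both the first start and the interval
effective_starts(scale.factor.init = 0.25,
                 scale.factor.search.lower = 0.4)
```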
When scale.factor.search.lower is NULL, an existing
bandwidth object's stored floor is inherited when available;
otherwise the package default 0.1 is used. Explicit bandwidths
supplied for storage with bandwidth.compute = FALSE are not
rewritten by the search floor.
Categorical search-start controls such as dfac.init,
lbd.init, and hbd.init have separate semantics and are
not affected by scale.factor.search.lower.
Documentation guide: see np.kernels for kernels, np.options for global options, and plot for plotting options.
For S3 plotting help, use methods("plot") and query
class-specific help topics such as ?plot.npregression and
?plot.rbandwidth. You can inspect implementations with
getS3method("plot","npregression").
npscoefbw implements a variety of methods for semiparametric
regression on multivariate (p+q-variate) explanatory data defined
over a set of possibly continuous data. The approach is based on Li and
Racine (2010) who employ ‘generalized product kernels’ that
admit a mix of continuous and discrete data types.
Three classes of kernel estimators for the continuous data types are
available: fixed, adaptive nearest-neighbor, and generalized
nearest-neighbor. Adaptive nearest-neighbor bandwidths change with
each sample realization in the set, x_i, when estimating the
density at the point x. Generalized nearest-neighbor bandwidths change
with the point at which the density is estimated, x. Fixed bandwidths
are constant over the support of x.
npscoefbw may be invoked either with a formula-like
symbolic description of variables on which bandwidth selection is to be
performed or through a simpler interface whereby data is passed
directly to the function via the xdat, ydat, and
zdat parameters. Use of these two interfaces is mutually
exclusive.
Data contained in the data frame xdat may be continuous and in
zdat may be of mixed type. Data can be entered in an arbitrary
order and data types will be detected automatically by the routine (see
np for details).
Data for which bandwidths are to be estimated may be specified
symbolically. A typical description has the form dependent
data ~ parametric explanatory data
| nonparametric explanatory data, where
dependent data is a univariate response, and
parametric explanatory data and
nonparametric explanatory data are both series of
variables specified by name, separated by the separation character
'+'. For example, y1 ~ x1 + x2 | z1 specifies that the
bandwidth object for the smooth coefficient model with response
y1, linear parametric regressors x1 and x2, and
nonparametric regressor (that is, the slope-changing variable)
z1 is to be estimated. See below for further examples. In the
case where the nonparametric (slope-changing) variable is not
specified, it is assumed to be the same as the parametric variable.
A variety of kernels may be specified by the user. Kernels implemented for continuous data types include the second, fourth, sixth, and eighth order Gaussian and Epanechnikov kernels, and the uniform kernel. Unordered discrete data types use a variation on Aitchison and Aitken's (1976) kernel, while ordered data types use a variation of the Wang and van Ryzin (1981) kernel.
Setting nomad=TRUE is a convenience preset for this automatic
LP route, not a generic optimizer alias. For smooth coefficient
regression it expands any missing values to the equivalent long-form
call
npscoefbw(...,
regtype = "lp",
search.engine = "nomad+powell",
degree.select = "coordinate",
bernstein.basis = TRUE,
degree.min = 0L,
degree.max = 10L,
degree.verify = FALSE,
bwtype = "fixed")
Compatible explicit tuning arguments are respected. Incompatible explicit settings fail fast so the shortcut never silently changes user-selected semantics.
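The merge semantics of the preset can be sketched in base R. The helper expand_nomad below is purely illustrative (not the package's internal code): unset arguments are filled from the preset, explicit compatible settings win, and an incompatible explicit setting fails fast:

```r
# Hypothetical illustration of the nomad=TRUE preset expansion;
# not the np package's internal implementation.
nomad_preset <- list(regtype = "lp",
                     search.engine = "nomad+powell",
                     degree.select = "coordinate",
                     bernstein.basis = TRUE,
                     degree.min = 0L,
                     degree.max = 10L,
                     degree.verify = FALSE,
                     bwtype = "fixed")

expand_nomad <- function(user = list()) {
  # fail fast on an incompatible explicit setting
  if (!is.null(user$regtype) && user$regtype != "lp")
    stop("nomad=TRUE requires regtype='lp'")
  # fill only the arguments the user left unset
  modifyList(nomad_preset, user)
}

# explicit compatible tuning is respected
expand_nomad(list(degree.max = 6L))$degree.max
```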
When regtype="lp" and degree.select != "manual",
npscoefbw can jointly determine the zdat-side local
polynomial degree vector together with the associated bandwidth
coordinates. With search.engine="cell", the criterion is
profiled over the admissible degree grid using cached
coordinate-wise or exhaustive search together with repeated
fixed-degree bandwidth solves. With search.engine="nomad" or
"nomad+powell", the criterion is optimized directly over the
joint degree/bandwidth space using crs::snomadr();
"nomad+powell" then performs one Powell hot start from the
NOMAD solution and keeps the better of the direct NOMAD and polished
answers. This polynomial-adaptive joint-search route is motivated by
Hall and Racine (2015). When bernstein.basis is not explicitly
supplied, the automatic search route defaults to
bernstein.basis=TRUE for numerical stability.
If bwtype is set to fixed, an object containing
bandwidths (or scale factors if bwscaling = TRUE) is
returned. If it is set to generalized_nn or adaptive_nn,
then instead the kth nearest neighbors are returned for the
continuous variables while the discrete kernel bandwidths are returned
for the discrete variables. Bandwidths are stored in a vector under the
component name bw. Backfitted bandwidths are stored under the
component name bw.fitted.
The functions predict, summary, and
plot support
objects of this class.
If you are using data of mixed types, then it is advisable to use the
data.frame function to construct your input data and not
cbind, since cbind will typically not work as
intended on mixed data types and will coerce the data to the same
type.
Caution: multivariate data-driven bandwidth selection methods are, by
their nature, computationally intensive. Virtually all methods
require dropping the ith observation from the data set,
computing an object, repeating this for all observations in the
sample, then averaging each of these leave-one-out estimates for a
given value of the bandwidth vector, and only then repeating
this a large number of times in order to conduct multivariate
numerical minimization/maximization. Furthermore, due to the potential
for local minima/maxima, restarting this procedure a large
number of times may often be necessary. This can be frustrating for
users possessing large datasets. For exploratory purposes, you may
wish to override the default search tolerances, say, setting
optim.reltol=.1 and conduct multistarting (the default is to restart
min(2,ncol(zdat)) times). Once the procedure terminates, you can restart
search with default tolerances using those bandwidths obtained from
the less rigorous search (i.e., set bws=bw on subsequent calls
to this routine where bw is the initial bandwidth object). A
version of this package using the Rmpi wrapper is under
development that allows one to deploy this software in a clustered
computing environment to facilitate computation involving large
datasets.
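The two-stage strategy described above can be sketched as follows (illustrative and not run, since data-driven search is computationally intensive; variable names follow the examples below):

```r
## Not run:
# stage 1: coarse exploratory search with a loose tolerance
bw.coarse <- npscoefbw(formula = y ~ x | z, optim.reltol = 0.1)
# stage 2: restart from the coarse bandwidths with default tolerances
bw <- npscoefbw(formula = y ~ x | z, bws = bw.coarse)
summary(bw)
## End(Not run)
```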
Support for backfitted bandwidths is experimental and is limited in functionality. The code does not support asymptotic standard errors or out-of-sample estimates with backfitting.
Tristen Hayfield tristen.hayfield@gmail.com, Jeffrey S. Racine racinej@mcmaster.ca
Aitchison, J. and C.G.G. Aitken (1976), “Multivariate binary discrimination by the kernel method,” Biometrika, 63, 413-420.
Cai Z. (2007), “Trending time-varying coefficient time series models with serially correlated errors,” Journal of Econometrics, 136, 163-188.
Hastie, T. and R. Tibshirani (1993), “Varying-coefficient models,” Journal of the Royal Statistical Society, B 55, 757-796.
Hall, P. and J.S. Racine (2015), “Infinite Order Cross-Validated Local Polynomial Regression,” Journal of Econometrics, 185, 510-525.
Li, Q. and J.S. Racine (2007), Nonparametric Econometrics: Theory and Practice, Princeton University Press.
Li, Q. and J.S. Racine (2010), “Smooth varying-coefficient estimation and inference for qualitative and quantitative data,” Econometric Theory, 26, 1-31.
Pagan, A. and A. Ullah (1999), Nonparametric Econometrics, Cambridge University Press.
Li, Q. and D. Ouyang and J.S. Racine (2013), “Categorical semiparametric varying-coefficient models,” Journal of Applied Econometrics, 28, 551-589.
Li, A. and Q. Li and J.S. Racine (under revision), “Boundary Adjusted, Polynomial Adaptive, Nonparametric Kernel Conditional Density Estimation,” Econometric Reviews.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions,” Biometrika, 68, 301-309.
np.kernels, np.options, plot
npregbw, npreg
## Not run:
# EXAMPLE 1 (INTERFACE=FORMULA):
set.seed(42)
n <- 100
x <- runif(n)
z <- runif(n, min=-2, max=2)
y <- x*exp(z)*(1.0+rnorm(n,sd = 0.2))
bw <- npscoefbw(formula=y~x|z)
summary(bw)
# EXAMPLE 1 (INTERFACE=DATA FRAME):
n <- 100
x <- runif(n)
z <- runif(n, min=-2, max=2)
y <- x*exp(z)*(1.0+rnorm(n,sd = 0.2))
bw <- npscoefbw(xdat=x, ydat=y, zdat=z)
summary(bw)
## End(Not run)