aws.gaussian (R Documentation)
The function implements a semiparametric adaptive weights smoothing algorithm designed for regression with additive heteroskedastic Gaussian noise. The noise variance is assumed to depend on the value of the regression function. This dependence is modeled by a global parametric (polynomial) model.
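As a sketch of the variance model in our own notation (the formula below is an illustration, not quoted from the package documentation): for varmodel="Quadratic" the noise variance at a design point x_i is modeled as

\sigma^2(x_i) = \beta_0 + \beta_1 \theta(x_i) + \beta_2 \theta(x_i)^2

where \theta denotes the regression function; "Linear" and "Constant" drop the quadratic term and both non-constant terms, respectively. The fitted coefficients are returned in the vcoef slot of the result.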
aws.gaussian(y, hmax = NULL, hpre = NULL, aws = TRUE, memory = FALSE,
varmodel = "Constant", lkern = "Triangle",
aggkern = "Uniform", scorr = 0, mask=NULL, ladjust = 1,
wghts = NULL, u = NULL, varprop = 0.1, graph = FALSE, demo = FALSE)
y: array of observed values (the regression function plus additive Gaussian noise) on a 1D, 2D or 3D grid; provide a vector, matrix or 3D array accordingly.

hmax: maximal bandwidth used in the iterative smoothing procedure.
hpre: bandwidth for an initial nonadaptive pre-smoothing step; its residuals provide the first estimate of the variance model parameters.
aws: logical: if TRUE structural adaptation (AWS) is used.

memory: logical: if TRUE stagewise aggregation is used as an additional adaptation scheme.

varmodel: implemented are "Constant", "Linear" and "Quadratic", referring to a polynomial model of degree 0 to 2 for the dependence of the variance on the mean.

lkern: character: location kernel, either "Triangle", "Plateau", "Quadratic", "Cubic" or "Gaussian". The default "Triangle" is equivalent to using an Epanechnikov kernel, "Quadratic" and "Cubic" refer to a Bi-weight and Tri-weight kernel, see Fan and Gijbels (1996). "Gaussian" is a truncated (compact support) Gaussian kernel. It is included for comparisons only and should be avoided due to its large computational cost.

aggkern: character: kernel used in stagewise aggregation, either "Triangle" or "Uniform".
scorr: vector of first-order spatial correlations of the noise, one entry per coordinate direction; defaults to 0 (uncorrelated noise).

mask: restrict smoothing to points where mask==TRUE; defaults to NULL, i.e. no restriction.

ladjust: factor to increase the default value of lambda.

wghts: vector of relative distances between neighboring grid points in the different coordinate directions, defining the metric used in the design space.
u: a "true" value of the regression function; may be provided to report risks at each iteration. This can be used to test the propagation condition.

varprop: small variance estimates are replaced by varprop times the mean of the estimated variances.

graph: if TRUE intermediate results are displayed graphically after each iteration step.

demo: if TRUE the function pauses after each iteration step and waits for user input.
The function implements the propagation-separation approach to nonparametric smoothing (formerly introduced as Adaptive Weights Smoothing) for varying coefficient likelihood models on a 1D, 2D or 3D grid. In contrast to function aws, observations are assumed to follow a Gaussian distribution with a variance that depends on the mean according to a specified global variance model. Setting aws=FALSE provides the stagewise aggregation procedure from Belomestny and Spokoiny (2004). Setting memory=FALSE provides adaptive weights smoothing without control by stagewise aggregation.
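A minimal 1D sketch of such a model (the data-generating choices below are illustrative assumptions, not taken from this help page); the variance grows linearly with the regression function, so varmodel="Linear" matches the truth:

library(aws)
set.seed(1)
n <- 1000
x <- seq(0, 1, length.out = n)
f <- 10 * sin(2 * pi * x) + 12         # true regression function, kept positive
v <- 0.2 + 0.5 * f                     # variance depends linearly on the mean
y <- rnorm(n, mean = f, sd = sqrt(v))  # heteroskedastic Gaussian observations
fit <- aws.gaussian(y, hmax = 100, varmodel = "Linear")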
The essential parameter in the procedure is a critical value lambda. This parameter has an interpretation as a significance level of a test for equivalence of two local parameter estimates. Values set internally are chosen to fulfil a propagation condition, i.e. in case of a constant (global) parameter value and large hmax the procedure provides, with high probability, the global (parametric) estimate.
More formally we require the parameter lambda to be specified such that

\mathbf{E}\, |\hat{\theta}^k - \theta| \le (1+\alpha)\, \mathbf{E}\, |\tilde{\theta}^k - \theta|

where \hat{\theta}^k is the AWS estimate in step k and \tilde{\theta}^k is the corresponding nonadaptive estimate using the same bandwidth (lambda=Inf).
The value of lambda can be adjusted by specifying the factor ladjust. Values ladjust > 1 lead to less effective adaptation, while ladjust << 1 may lead to random segmentation of regions that are homogeneous with respect to a constant model.
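A hedged sketch of how this can be checked in practice (reusing the simulated y and f from the sketch above, both illustrative assumptions): supplying the truth via u makes the function record the mean absolute error of each iteration in the mae slot, and ladjust rescales lambda.

## constant truth: by the propagation condition the reported risk should
## stay close to the risk of the nonadaptive estimate
y0   <- rnorm(1000, mean = 5)
fit0 <- aws.gaussian(y0, hmax = 100, u = rep(5, 1000))
fit0@mae                               # risk per iteration step

## weaker adaptation (ladjust > 1) on the heteroskedastic example
fitA <- aws.gaussian(y, hmax = 100, varmodel = "Linear", ladjust = 2, u = f)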
The numerical complexity of the procedure is mainly determined by hmax. The number of iterations is approximately Const*d*log(hmax)/log(1.25), with d being the dimension of y and the constant depending on the kernel lkern. The complexity of each iteration step is Const*hakt*n, with hakt being the actual bandwidth in that step and n the number of design points. hmax determines the maximal possible variance reduction.
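For a quick back-of-the-envelope check of the iteration count (dropping the kernel-dependent proportionality constant):

d <- 1; hmax <- 100
ceiling(d * log(hmax) / log(1.25))     # rough number of iteration steps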
The function returns an object of class aws with the following slots:
y = "numeric" |
y |
dy = "numeric" |
dim(y) |
x = "numeric" |
numeric(0) |
ni = "integer" |
integer(0) |
mask = "logical" |
logical(0) |
theta = "numeric" |
Estimates of regression function, |
mae = "numeric" |
Mean absolute error for each iteration step if u was specified, numeric(0) else |
var = "numeric" |
approx. variance of the estimates of the regression function. Please note that this does not reflect variability due to randomness of weights. |
xmin = "numeric" |
numeric(0) |
xmax = "numeric" |
numeric(0) |
wghts = "numeric" |
numeric(0), ratio of distances |
degree = "integer" |
0 |
hmax = "numeric" |
effective hmax |
sigma2 = "numeric" |
provided or estimated error variance |
scorr = "numeric" |
scorr |
family = "character" |
"Gaussian" |
shape = "numeric" |
NULL |
lkern = "integer" |
integer code for lkern, 1="Plateau", 2="Triangle", 3="Quadratic", 4="Cubic", 5="Gaussian" |
lambda = "numeric" |
effective value of lambda |
ladjust = "numeric" |
effective value of ladjust |
aws = "logical" |
aws |
memory = "logical" |
memory |
homogen = "logical" |
homogen |
earlystop = "logical" |
FALSE |
varmodel = "character" |
varmodel |
vcoef = "numeric" |
estimated parameters of the variance model |
call = "function" |
the arguments of the call to |
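A short sketch of accessing these slots (assuming a fitted object fit from a call like the one in the examples; the returned object is an S4 object, so slots are read with @):

est <- fit@theta        # estimated values of the regression function
se  <- sqrt(fit@var)    # approximate pointwise standard deviations
fit@vcoef               # fitted coefficients of the variance model
fit@hmax                # bandwidth actually reached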
Joerg Polzehl, polzehl@wias-berlin.de, https://www.wias-berlin.de/people/polzehl/
Joerg Polzehl, Vladimir Spokoiny, Adaptive weights smoothing with applications to image restoration, J. R. Stat. Soc. Ser. B Stat. Methodol. 62 (2000), pp. 335–354.

Joerg Polzehl, Vladimir Spokoiny, Propagation-separation approach for local likelihood estimation, Probab. Theory Related Fields 135(3) (2006), pp. 335–362.

Joerg Polzehl, Vladimir Spokoiny, Structural adaptive smoothing by propagation-separation methods, in: Chen, C., Haerdle, W. and Unwin, A. (eds.), Handbook of Data Visualization, Springer-Verlag, 2008, pp. 471–492.
See also aws, awsdata, aws.irreg.
require(aws)
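A more complete, hedged example (2D image data; all specific values are illustrative and not part of the original examples):

set.seed(1)
nx <- ny <- 64
xg <- seq(0, 1, length.out = nx)
f2 <- outer(xg, xg, function(u, v) 10 * exp(-20 * ((u - 0.5)^2 + (v - 0.5)^2)) + 2)
y2 <- array(rnorm(nx * ny, mean = f2, sd = sqrt(0.2 + 0.3 * f2)), dim = c(nx, ny))
fit2 <- aws.gaussian(y2, hmax = 8, varmodel = "Linear")
image(matrix(fit2@theta, nx, ny))   # smoothed image
fit2@vcoef                          # estimated parameters of the variance model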