Function used to fit a QRNN model or ensemble of QRNN models.
x 
covariate matrix with number of rows equal to the number of samples and number of columns equal to the number of variables. 
y 
response column matrix with number of rows equal to the number of samples. 
n.hidden 
number of hidden nodes in the QRNN model. 
w 
vector of weights with length equal to the number of samples; NULL gives equal weight to each sample.
tau 
desired tau-quantile(s). 
n.ensemble 
number of ensemble members to fit. 
iter.max 
maximum number of iterations of the optimization algorithm. 
n.trials 
number of repeated trials used to avoid local minima. 
bag 
logical variable indicating whether or not bootstrap aggregation (bagging) should be used. 
lower 
left censoring point. 
init.range 
initial weight range for input-hidden and hidden-output weight matrices. 
monotone 
column indices of covariates for which the monotonicity constraint should hold. 
additive 
force additive relationships. 
eps.seq 
sequence of eps approximation tolerance values used by the finite smoothing algorithm. 
Th 
hidden layer transfer function; use sigmoid, elu, or softplus for a nonlinear model and linear for a linear model. 
Th.prime 
derivative of the hidden layer transfer function Th. 
penalty 
weight penalty for weight decay regularization. 
unpenalized 
column indices of covariates for which the weight penalty should not be applied to input-hidden layer weights. 
n.errors.max 
maximum number of nlm optimization failures allowed before quitting. 
trace 
logical variable indicating whether or not diagnostic messages are printed during optimization. 
... 
additional parameters passed to the nlm optimization routine. 
Fit a censored quantile regression neural network model for the tau-quantile by minimizing a cost function based on smooth Huber-norm approximations to the tilted absolute value and ramp functions. Left censoring can be turned on by setting lower to a value greater than -Inf. A simplified form of the finite smoothing algorithm, in which the nlm optimization algorithm is run with values of the eps approximation tolerance progressively reduced in magnitude over the sequence eps.seq, is used to set the QRNN weights and biases. Local minima of the cost function can be avoided by setting n.trials, which controls the number of repeated runs from different starting weights and biases, to a value greater than one.
(Note: if eps.seq is set to a single, sufficiently large value and tau is set to 0.5, then the result will be a standard least squares regression model. The same value of eps.seq combined with other values of tau leads to expectile regression.)
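The least squares/expectile special case mentioned above can be illustrated as follows. This is a sketch only: the choice eps.seq=1 is an arbitrary "sufficiently large" tolerance for illustration, not a package default, and the fitting settings are kept small for speed.

```r
library(qrnn)

x <- as.matrix(iris[, "Petal.Length", drop=FALSE])
y <- as.matrix(iris[, "Petal.Width", drop=FALSE])

## Single large eps with tau=0.5: the smoothed tilted absolute value
## cost approaches a sum of squared errors, i.e., least squares regression
fit.ls <- qrnn.fit(x, y, n.hidden=1, tau=0.5, eps.seq=1,
                   Th=linear, Th.prime=linear.prime,
                   iter.max=200, n.trials=1)

## Same eps with tau=0.9: an expectile regression fit
fit.expectile <- qrnn.fit(x, y, n.hidden=1, tau=0.9, eps.seq=1,
                          Th=linear, Th.prime=linear.prime,
                          iter.max=200, n.trials=1)
```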
The hidden layer transfer function Th and its derivative Th.prime should be set to sigmoid, elu, or softplus and sigmoid.prime, elu.prime, or softplus.prime for a nonlinear model and to linear and linear.prime for a linear model.
If invoked, the monotone argument enforces non-decreasing behaviour between specified columns of x and model outputs. This holds if Th and To are monotone non-decreasing functions. In this case, the exp function is applied to the relevant weights following initialization and during optimization; manual adjustment of init.weights or qrnn.initialize may be needed due to differences in scaling of the constrained and unconstrained weights. Non-increasing behaviour can be forced by transforming the relevant covariates, e.g., by reversing sign.
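The sign-reversal trick for non-increasing behaviour can be sketched as follows (an illustrative fragment; the covariate choice and fitting settings are arbitrary):

```r
library(qrnn)

x <- as.matrix(iris[, "Petal.Length", drop=FALSE])
y <- as.matrix(iris[, "Petal.Width", drop=FALSE])

## Negate the covariate: the non-decreasing constraint on the transformed
## variable corresponds to non-increasing behaviour in the original
x.neg <- -x
fit.dec <- qrnn.fit(x.neg, y, n.hidden=2, tau=0.5, monotone=1,
                    iter.max=200, n.trials=1)

## Predictions must use the same sign-reversed covariate
pred.dec <- qrnn.predict(-x, fit.dec)
```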
The additive argument sets relevant input-hidden layer weights to zero, resulting in a purely additive model. Interactions between covariates are thus suppressed, leading to a compromise in flexibility between linear quantile regression and the quantile regression neural network. Borrowing strength by using a composite model for multiple regression quantiles is also possible (see composite.stack). Applying the monotone constraint in combination with the composite model allows one to simultaneously estimate multiple non-crossing quantiles; the resulting monotone composite QRNN (MCQRNN) is demonstrated in mcqrnn.
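A purely additive fit with two covariates might look like the following sketch (the covariates and fitting settings are illustrative choices, not recommendations):

```r
library(qrnn)

x <- as.matrix(iris[, c("Petal.Length", "Sepal.Length")])
y <- as.matrix(iris[, "Petal.Width", drop=FALSE])

## additive=TRUE suppresses interactions between the two covariates,
## yielding a sum of univariate nonlinear effects
fit.add <- qrnn.fit(x, y, n.hidden=4, tau=0.5, additive=TRUE,
                    iter.max=500, n.trials=1)
pred.add <- qrnn.predict(x, fit.add)
```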
In the linear case, model complexity does not depend on the number of hidden nodes; the value of n.hidden is ignored and is instead set to one internally. In the nonlinear case, n.hidden controls the overall complexity of the model. As an added means of avoiding overfitting, weight penalty regularization for the magnitude of the input-hidden layer weights (excluding biases) can be applied by setting penalty to a nonzero value. (For the linear model, this penalizes both input-hidden and hidden-output layer weights, leading to a quantile ridge regression model. In this case, kernel quantile ridge regression can be performed with the aid of the qrnn.rbf function.) Finally, if the bag argument is set to TRUE, models are trained on bootstrapped x and y sample pairs; bootstrap aggregation (bagging) can be turned on by setting n.ensemble to a value greater than one. Averaging over an ensemble of bagged models will also tend to alleviate overfitting.
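Weight penalty regularization and a bagged ensemble can be combined as in the following sketch. The penalty value and ensemble size are arbitrary, and the block assumes that qrnn.predict returns one column of predictions per ensemble member, which are then averaged by hand:

```r
library(qrnn)

x <- as.matrix(iris[, "Petal.Length", drop=FALSE])
y <- as.matrix(iris[, "Petal.Width", drop=FALSE])

## Five bagged ensemble members with a small weight penalty
fit.bag <- qrnn.fit(x, y, n.hidden=3, tau=0.5, penalty=0.01,
                    bag=TRUE, n.ensemble=5,
                    iter.max=200, n.trials=1)

## Average over ensemble members to alleviate overfitting
pred.ens <- qrnn.predict(x, fit.bag)
pred.avg <- rowMeans(pred.ens)
```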
The gam.style function can be used to plot modified generalized additive model effects plots, which are useful for visualizing the modelled covariate-response relationships.
Note: values of x and y need not be standardized or rescaled by the user. All variables are automatically scaled to zero mean and unit standard deviation prior to fitting and parameters are automatically rescaled by qrnn.predict and other prediction functions. Values of eps.seq are relative to the residuals in standard deviation units.
a list containing elements
weights 
a list containing fitted weight matrices. 
lower 
left censoring point. 
eps.seq 
sequence of eps approximation tolerance values used by the finite smoothing algorithm. 
tau 
desired tau-quantile(s). 
Th 
hidden layer transfer function. 
x.center 
vector of column means for x. 
x.scale 
vector of column standard deviations for x. 
y.center 
vector of column means for y. 
y.scale 
vector of column standard deviations for y. 
monotone 
column indices indicating covariate monotonicity constraints. 
additive 
flag indicating whether additive relationships were forced. 
Cannon, A.J., 2011. Quantile regression neural networks: implementation in R and application to precipitation downscaling. Computers & Geosciences, 37: 1277-1284. doi:10.1016/j.cageo.2010.07.005
Cannon, A.J., 2017. Non-crossing nonlinear regression quantiles by monotone composite quantile regression neural network, with application to rainfall extremes. EarthArXiv <https://eartharxiv.org/wg7sn>. doi:10.17605/OSF.IO/WG7SN
qrnn.predict, qrnn.cost, composite.stack, mcqrnn, gam.style
x <- as.matrix(iris[,"Petal.Length",drop=FALSE])
y <- as.matrix(iris[,"Petal.Width",drop=FALSE])
cases <- order(x)
x <- x[cases,,drop=FALSE]
y <- y[cases,,drop=FALSE]
tau <- c(0.05, 0.5, 0.95)
set.seed(1)

## QRNN models for conditional 5th, 50th, and 95th percentiles
w <- p <- vector("list", length(tau))
for(i in seq_along(tau)){
    w[[i]] <- qrnn.fit(x=x, y=y, n.hidden=3, tau=tau[i],
                       iter.max=200, n.trials=1)
    p[[i]] <- qrnn.predict(x, w[[i]])
}

## Monotone composite QRNN (MCQRNN) for simultaneous estimation of
## multiple non-crossing quantile functions
x.y.tau <- composite.stack(x, y, tau)
fit.mcqrnn <- qrnn.fit(cbind(x.y.tau$tau, x.y.tau$x), x.y.tau$y,
                       tau=x.y.tau$tau, n.hidden=3, n.trials=1,
                       iter.max=500, monotone=1)
pred.mcqrnn <- matrix(qrnn.predict(cbind(x.y.tau$tau, x.y.tau$x),
                                   fit.mcqrnn), ncol=length(tau))

par(mfrow=c(1, 2))
matplot(x, matrix(unlist(p), nrow=nrow(x), ncol=length(p)), col="red",
        type="l")
points(x, y)
matplot(x, pred.mcqrnn, col="blue", type="l")
points(x, y)
