Function used to fit a QRNN model or ensemble of QRNN models.


`x` |
covariate matrix with number of rows equal to the number of samples and number of columns equal to the number of variables. |

`y` |
predictand column matrix with number of rows equal to the number of samples. |

`n.hidden` |
number of hidden nodes in the QRNN model. |

`tau` |
desired tau-quantile. |

`n.ensemble` |
number of ensemble members to fit. |

`iter.max` |
maximum number of iterations of the optimization algorithm. |

`n.trials` |
number of repeated trials used to avoid local minima. |

`bag` |
logical variable indicating whether or not bootstrap aggregation (bagging) should be used. |

`lower` |
left censoring point. |

`eps.seq` |
sequence of `eps` values used by the finite smoothing algorithm. |

`Th` |
hidden layer transfer function; use `sigmoid` for a nonlinear model and `linear` for a linear model. |

`Th.prime` |
derivative of the hidden layer transfer function; use `sigmoid.prime` or `linear.prime` to match the choice of `Th`. |

`penalty` |
weight penalty for weight decay regularization. |

`trace` |
logical variable indicating whether or not diagnostic messages are printed during optimization. |

`...` |
additional parameters passed to the `nlm` optimization routine. |

Fit a censored quantile regression neural network model for the
`tau`-quantile by minimizing a cost function based on the Huber
norm approximation to the tilted absolute value and ramp functions.
Left censoring can be turned on by setting `lower` to a value
greater than `-Inf`. A simplified form of the finite smoothing
algorithm, in which the `nlm` optimization algorithm is run with
values of the Huber norm `eps` parameter progressively reduced in
magnitude over the sequence `eps.seq`, is used to set the
QRNN weights and biases. Local minima of the cost function can be
avoided by setting `n.trials`, which controls the number of
repeated runs from different starting weights and biases, to a value
greater than one.
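For instance, a left-censored fit can be set up as in the following sketch; it assumes the qrnn package is loaded and that `x` and `y` are a covariate matrix and predictand column matrix (e.g. precipitation amounts with many exact zeros), as described in the arguments above:

```r
library(qrnn)

## Hypothetical data: one covariate, predictand censored at zero.
set.seed(1)
x <- matrix(runif(200, -1, 1), ncol = 1)
y <- matrix(pmax(0, x[, 1] + rnorm(200, sd = 0.3)), ncol = 1)

## Fit a left-censored QRNN for the 90th percentile; lower = 0 turns on
## censoring at zero, and n.trials = 3 repeats the optimization from
## three random starting points to help avoid local minima.
fit <- qrnn.fit(x = x, y = y, n.hidden = 3, tau = 0.9,
                lower = 0, n.trials = 3, iter.max = 500)
pred <- qrnn.predict(x, fit)
```

The data-generating step here is illustrative only; in practice `x` and `y` come from the user's own dataset.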

The hidden layer transfer function `Th` and its derivative
`Th.prime` should be set to `sigmoid` and `sigmoid.prime` for a
nonlinear model, and to `linear` and `linear.prime` for a linear
model.
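As a sketch (assuming `x` and `y` are defined as in the arguments above), the linear variant is selected by passing the linear transfer functions:

```r
library(qrnn)

## Linear quantile regression via qrnn.fit: pass linear transfer
## functions. In this case n.hidden is ignored and set to one
## internally, so its value here is immaterial.
fit.lin <- qrnn.fit(x = x, y = y, n.hidden = 1, tau = 0.5,
                    Th = linear, Th.prime = linear.prime,
                    iter.max = 500, n.trials = 1)
pred.lin <- qrnn.predict(x, fit.lin)
```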

In the linear case, model complexity does not depend on the number
of hidden nodes; the value of `n.hidden` is ignored and is instead
set to one internally. In the nonlinear case, `n.hidden`
controls the overall complexity of the model. As an added means of
avoiding overfitting, weight penalty regularization for the magnitude
of the input-hidden layer weights (excluding biases) can be applied
by setting `penalty` to a nonzero value. (For the linear model,
this penalizes both input-hidden and hidden-output layer weights,
leading to a quantile ridge regression model. In this case, kernel
quantile ridge regression can be performed with the aid of the
`qrnn.rbf` function.) Finally, if the `bag` argument
is set to `TRUE`, models are trained on bootstrapped `x` and
`y` sample pairs; bootstrap aggregation (bagging) can be turned
on by setting `n.ensemble` to a value greater than one. Averaging
over an ensemble of bagged models will also tend to alleviate
overfitting.
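A bagged ensemble might be fit as follows; this sketch assumes `x` and `y` are defined as above, and that (as in the qrnn package) `qrnn.predict` returns one column of predictions per ensemble member:

```r
library(qrnn)

## Bagged ensemble of five QRNN models: bag = TRUE trains each member
## on a bootstrap resample of the (x, y) pairs.
fit.ens <- qrnn.fit(x = x, y = y, n.hidden = 4, tau = 0.5,
                    n.ensemble = 5, bag = TRUE,
                    iter.max = 500, n.trials = 1)

## Average the ensemble members' predictions row-wise.
pred.ens <- rowMeans(qrnn.predict(x, fit.ens))
```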

Note: values of `x` and `y` need not be standardized or
rescaled by the user. All variables are automatically scaled to zero
mean and unit standard deviation prior to fitting, and parameters are
automatically rescaled by `qrnn.predict`. Values of
`eps.seq` are relative to the residuals in standard deviation
units.

Returns a list containing the elements:

`weights` |
a list containing fitted weight matrices |

`lower` |
left censoring point |

`eps.seq` |
sequence of `eps` values used by the finite smoothing algorithm |

`tau` |
desired tau-quantile |

`Th` |
hidden layer transfer function |

`x.center` |
vector of column means for `x` |

`x.scale` |
vector of column standard deviations for `x` |

`y.center` |
vector of column means for `y` |

`y.scale` |
vector of column standard deviations for `y` |

See also: `qrnn.predict`, `qrnn.nlm`, `qrnn.cost`.

```
data(sinc)
x <- sinc$x
y <- sinc$y
q <- sinc$tau
probs <- c(0.05, 0.50, 0.95)
## Fit QRNN models for 5th, 50th, and 95th percentiles
set.seed(1)
w <- p <- list()
for(i in seq_along(probs)){
    w[[i]] <- qrnn.fit(x = x, y = y, n.hidden = 4, tau = probs[i],
                       iter.max = 1000, n.trials = 1)
    p[[i]] <- qrnn.predict(x, w[[i]])
}
plot(x, y, ylim = range(pretty(c(y, q))))
matlines(x, q, lwd = 2)
matlines(x, matrix(unlist(p), nrow = nrow(x), ncol = length(p)))
```
