Description
The learning rate (sigma) of the Gibbs posterior is tuned either by calibrating the credible intervals for the fitted curve, or by minimizing the pinball loss on out-of-sample data. This is done by bootstrapping or by k-fold cross-validation. Here the calibration loss function is evaluated on a grid of values provided by the user (a sketch of both tuning approaches follows the usage below).
Usage

tuneLearn(form, data, lsig, qu, err = NULL,
          multicore = !is.null(cluster), cluster = NULL,
          ncores = detectCores() - 1, paropts = list(),
          control = list(), argGam = NULL)
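The description mentions two tuning criteria (calibration of the credible intervals, and out-of-sample pinball loss) and two sampling schemes (bootstrapping and k-fold cross-validation). A minimal sketch of selecting between them is below; the control entries "loss", "sam" and "K" are assumptions based on the package vignette, not documented on this page, so check the control argument documentation before relying on them.

library(qgam); library(MASS)
set.seed(5235)

# Calibration-based tuning, with the loss evaluated by bootstrapping.
# NOTE: the "loss", "sam" and "K" control entries are assumptions.
calFit <- tuneLearn(accel ~ s(times, k = 20, bs = "ad"), data = mcycle,
                    lsig = seq(1.5, 5, length.out = 10), qu = 0.5,
                    control = list("loss" = "cal", "sam" = "boot", "K" = 50))

# Pinball-loss tuning on out-of-sample data, via k-fold cross-validation
pinFit <- tuneLearn(accel ~ s(times, k = 20, bs = "ad"), data = mcycle,
                    lsig = seq(1.5, 5, length.out = 10), qu = 0.5,
                    control = list("loss" = "pin", "sam" = "kfold", "K" = 10))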
Arguments

form: A GAM formula, or a list of formulae. See ?mgcv::gam for details.
data: A data frame or list containing the model response variable and the covariates required by the formula. By default the variables are taken from environment(formula): typically the environment from which gam is called.
lsig: A vector of values of the log learning rate (log(sigma)) over which the calibration loss function is evaluated.
qu: The quantile of interest. Should be in (0, 1).
err: An upper bound on the error of the estimated quantile curve. Should be in (0, 1). Since qgam v1.3 it is selected automatically, using the methods of Fasiolo et al. (2020). The old default was err = 0.05.
multicore: If TRUE, the calibration will happen in parallel.
cluster: An object of class c("SOCKcluster", "cluster"), such as one created by parallel::makeCluster.
ncores: Number of cores used. Relevant only when the calibration runs in parallel.
paropts: A list of additional options passed into the foreach function when parallel computation is enabled. This is important if (for example) your code relies on external data or packages: use the .export and .packages arguments to supply them, so that all cluster nodes have the correct environment set up for computing. See the parallel sketch after this argument list.
control: A list of control parameters for tuneLearn.
argGam: A list of parameters to be passed to mgcv::gam.
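The parallel arguments combine as in the following minimal sketch. Only multicore, cluster, ncores and paropts come from this page; the cluster setup itself is an assumption based on the standard parallel package.

library(qgam); library(MASS); library(parallel)

# Option 1: let tuneLearn create and tear down its own workers
closs <- tuneLearn(accel ~ s(times, k = 20, bs = "ad"), data = mcycle,
                   lsig = seq(1.5, 5, length.out = 10), qu = 0.5,
                   multicore = TRUE, ncores = 2)

# Option 2: supply a pre-built cluster, and use paropts to make sure
# every node has the required packages loaded (.packages is a standard
# foreach argument)
cl <- makeCluster(2)
closs <- tuneLearn(accel ~ s(times, k = 20, bs = "ad"), data = mcycle,
                   lsig = seq(1.5, 5, length.out = 10), qu = 0.5,
                   multicore = TRUE, cluster = cl,
                   paropts = list(.packages = c("qgam", "mgcv")))
stopCluster(cl)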
Value

A list with entries:

lsig: the value of log(sigma) resulting in the lowest loss.
loss: a vector containing the value of the calibration loss function corresponding to each value of log(sigma).
edf: a matrix where the first column contains the log(sigma) sequence, and the remaining columns contain the corresponding effective degrees of freedom of each smooth.
convProb: a logical vector indicating, for each value of log(sigma), whether the outer optimization which estimates the smoothing parameters has encountered convergence issues. FALSE means no problem.
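A short sketch of inspecting these entries, assuming closs is the object returned by the grid-search example below:

closs$lsig                          # log(sigma) giving the lowest loss
plot(closs$edf[ , 1], closs$edf[ , 2], type = "b",
     xlab = "log(sigma)", ylab = "EDF of the first smooth")
any(closs$convProb)                 # TRUE if any fit had convergence issues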
Author(s)

Matteo Fasiolo <matteo.fasiolo@gmail.com>.
References

Fasiolo, M., Wood, S.N., Zaffran, M., Nedellec, R. and Goude, Y., 2020. Fast calibrated additive quantile regression. Journal of the American Statistical Association. https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1725521
Examples

library(qgam); library(MASS)
# Calibrate learning rate on a grid
set.seed(41444)
sigSeq <- seq(1.5, 5, length.out = 10)
closs <- tuneLearn(form = accel ~ s(times, k = 20, bs = "ad"),
                   data = mcycle,
                   lsig = sigSeq,
                   qu = 0.5)
plot(sigSeq, closs$loss, type = "b", ylab = "Calibration Loss", xlab = "log(sigma)")
# Pick best log-sigma
best <- sigSeq[ which.min(closs$loss) ]
abline(v = best, lty = 2)
# Fit using the best sigma
fit <- qgam(accel ~ s(times, k = 20, bs = "ad"), data = mcycle, qu = 0.5, lsig = best)
summary(fit)
pred <- predict(fit, se = TRUE)
plot(mcycle$times, mcycle$accel, xlab = "Times", ylab = "Acceleration",
ylim = c(-150, 80))
lines(mcycle$times, pred$fit, lwd = 1)
lines(mcycle$times, pred$fit + 2*pred$se.fit, lwd = 1, col = 2)
lines(mcycle$times, pred$fit - 2*pred$se.fit, lwd = 1, col = 2)