nlrq: Function to compute nonlinear quantile regression estimates

Description

This function implements an R version of an interior point method for computing the solution to quantile regression problems which are nonlinear in the parameters. The algorithm is based on interior point ideas described in Koenker and Park (1994).

Usage

nlrq(formula, data=parent.frame(), start, tau=0.5,
     control, trace=FALSE, method="L-BFGS-B")
## S3 method for class 'nlrq'
summary(object, ...)
## S3 method for class 'summary.nlrq'
print(x, digits = max(5, .Options$digits - 2), ...)

Arguments

formula

formula for the model in nls format; self-starting models are accepted

data

an optional data frame in which to evaluate the variables in ‘formula’

start

a named list or named numeric vector of starting estimates

tau

a vector of quantiles to be estimated

control

an optional list of control settings. See ‘nlrq.control’ for the names of the settable control values and their effect.

trace

logical value indicating whether a trace of the iteration progress should be printed. Default is ‘FALSE’. If ‘TRUE’, intermediate results are printed at the end of each iteration.

method

method passed to optim for the line search; the default is "L-BFGS-B", but for some problems "BFGS" may be preferable. See optim for further details. Note that the algorithm passes upper and lower bounds for the line search to optim, which is fine for the "L-BFGS-B" method; other methods will produce warnings about these arguments, so users should proceed at their own risk. A brief usage sketch follows this argument list.

object

an object of class nlrq needing summary.

x

an object of class summary.nlrq needing printing.

digits

Significant digits reported in the printed table.

...

Optional arguments passed to printing function.
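
As a brief illustration of the ‘control’ and ‘method’ arguments, a minimal sketch follows. The data, the starting values, and the ‘nlrq.control’ setting names used here (‘maxiter’, ‘eps’) are assumptions for illustration only; consult ‘nlrq.control’ for the authoritative argument names.

# illustrative data and starting values (hypothetical, not from this page)
set.seed(2)
d <- data.frame(x = 1:50)
d$y <- 5 * exp(-0.1 * d$x) * rnorm(50, 1, 0.05)
# tighter convergence settings via nlrq.control (argument names assumed)
fit <- nlrq(y ~ A * exp(-k * x), data = d, start = list(A = 4, k = 0.2),
            tau = 0.5, control = nlrq.control(maxiter = 200, eps = 1e-8))
# alternative line-search method; expect warnings about the unused
# 'lower'/'upper' bounds, as noted for the 'method' argument above
fit2 <- nlrq(y ~ A * exp(-k * x), data = d, start = list(A = 4, k = 0.2),
             tau = 0.5, method = "BFGS")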

Details

An ‘nlrq’ object is a type of fitted model object. It has methods for the generic functions ‘coef’ (parameter estimates at the best solution), ‘formula’ (the model used), ‘deviance’ (value of the objective function at the best solution), ‘print’, ‘summary’, ‘fitted’ (vector of fitted values according to the model), ‘predict’ (vector of predictions from the model, optionally using new values for the independent variables) and also for the function ‘tau’ (the quantile used in fitting, i.e. the tau argument of the call). Further help is also available for the method ‘residuals’. The summary method for nlrq uses a bootstrap approach based on the final linearization of the model evaluated at the estimated parameters.
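
For example, given a fitted object such as the ‘Dat.nlrq’ model created in the Examples section below, these extractors could be used as follows (a sketch; no output shown):

coef(Dat.nlrq)                               # parameter estimates at the best solution
deviance(Dat.nlrq)                           # objective function value at the best solution
fitted(Dat.nlrq)[1:5]                        # first few fitted values
predict(Dat.nlrq, newdata = list(x = 1:25))  # predictions for new x values
tau(Dat.nlrq)                                # quantile used in fitting
summary(Dat.nlrq)                            # bootstrap-based summary of the fit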

Value

A list consisting of:

m

an ‘nlrqModel’ object similar to an ‘nlsModel’ in package nls

data

the expression that was passed to ‘nlrq’ as the data argument. The actual data values are present in the environment of the ‘m’ component.

Author(s)

Based on S code by Roger Koenker, modified for R and to accept models specified in nls style by Philippe Grosjean.

References

Koenker, R. and Park, B.J. (1994). An Interior Point Algorithm for Nonlinear Quantile Regression, Journal of Econometrics, 71(1-2): 265-283.

See Also

nlrq.control, residuals.nlrq

Examples

# build artificial data with multiplicative error
Dat <- NULL; Dat$x <- rep(1:25, 20)
set.seed(1)
Dat$y <- SSlogis(Dat$x, 10, 12, 2)*rnorm(500, 1, 0.1)
plot(Dat)
# first fit a nonlinear least-squares regression
Dat.nls <- nls(y ~ SSlogis(x, Asym, mid, scal), data=Dat); Dat.nls
lines(1:25, predict(Dat.nls, newdata=list(x=1:25)), col=1)
# then fit the median using nlrq
Dat.nlrq <- nlrq(y ~ SSlogis(x, Asym, mid, scal), data=Dat, tau=0.5, trace=TRUE)
lines(1:25, predict(Dat.nlrq, newdata=list(x=1:25)), col=2)
# then the first and third quartile regressions
Dat.nlrq <- nlrq(y ~ SSlogis(x, Asym, mid, scal), data=Dat, tau=0.25, trace=TRUE)
lines(1:25, predict(Dat.nlrq, newdata=list(x=1:25)), col=3)
Dat.nlrq <- nlrq(y ~ SSlogis(x, Asym, mid, scal), data=Dat, tau=0.75, trace=TRUE)
lines(1:25, predict(Dat.nlrq, newdata=list(x=1:25)), col=3)
# and finally "external envelopes" holding 95 percent of the data
Dat.nlrq <- nlrq(y ~ SSlogis(x, Asym, mid, scal), data=Dat, tau=0.025, trace=TRUE)
lines(1:25, predict(Dat.nlrq, newdata=list(x=1:25)), col=4)
Dat.nlrq <- nlrq(y ~ SSlogis(x, Asym, mid, scal), data=Dat, tau=0.975, trace=TRUE)
lines(1:25, predict(Dat.nlrq, newdata=list(x=1:25)), col=4)
leg <- c("least squares","median (0.5)","quartiles (0.25/0.75)",".95 band (0.025/0.975)")
legend(1, 12.5, legend=leg, lty=1, col=1:4)
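
A compact variant of the example above, looping over the same set of quantiles with the same model and data (a sketch, not part of the original example):

taus <- c(0.025, 0.25, 0.5, 0.75, 0.975)
fits <- lapply(taus, function(tt)
  nlrq(y ~ SSlogis(x, Asym, mid, scal), data = Dat, tau = tt))
for (i in seq_along(fits))
  lines(1:25, predict(fits[[i]], newdata = list(x = 1:25)), col = i + 1)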

Example output

Loading required package: SparseM

Attaching package: 'SparseM'

The following object is masked from 'package:base':

    backsolve

Nonlinear regression model
  model: y ~ SSlogis(x, Asym, mid, scal)
   data: Dat
  Asym    mid   scal 
 9.968 11.947  1.962 
 residual sum-of-squares: 241.8

Number of iterations to convergence: 0 
Achieved convergence tolerance: 6.882e-07
109.059 :   9.968027 11.947208  1.962113 
final  value 108.942725 
converged
lambda = 1 
108.9427 :   9.958648 11.943273  1.967144 
final  value 108.490939 
converged
lambda = 0.9750984 
108.4909 :   9.949430 11.987472  1.998607 
final  value 108.471416 
converged
lambda = 0.9999299 
108.4714 :   9.94163 11.99077  1.99344 
final  value 108.471243 
converged
lambda = 1 
108.4712 :   9.941008 11.990550  1.992921 
final  value 108.470935 
stopped after 4 iterations
lambda = 0.8621249 
108.4709 :   9.942734 11.992773  1.993209 
final  value 108.470923 
converged
lambda = 0.9999613 
108.4709 :   9.942629 11.992728  1.993136 
final  value 108.470919 
converged
lambda = 1 
108.4709 :   9.942644 11.992737  1.993144 
final  value 108.470919 
converged
lambda = 1 
108.4709 :   9.942644 11.992737  1.993144 
final  value 108.470919 
converged
lambda = 1 
108.4709 :   9.942644 11.992737  1.993144 
108.6656 :   9.968027 11.947208  1.962113 
final  value 89.108243 
converged
lambda = 1 
89.10824 :   9.432250 11.803924  1.923472 
final  value 85.688895 
converged
lambda = 1 
85.6889 :   9.183598 11.794244  1.929699 
final  value 85.473712 
converged
lambda = 0.6405076 
85.47371 :   9.212527 11.844090  1.938003 
final  value 85.447786 
converged
lambda = 1 
85.44779 :   9.234097 11.863975  1.949241 
final  value 85.446407 
converged
lambda = 1 
85.44641 :   9.242009 11.866644  1.954192 
final  value 85.445691 
converged
lambda = 1 
85.44569 :   9.234247 11.864554  1.952338 
final  value 85.444920 
converged
lambda = 1 
85.44492 :   9.232975 11.863979  1.953587 
final  value 85.443854 
converged
lambda = 0.363237 
85.44385 :   9.233661 11.864280  1.957197 
final  value 85.443668 
converged
lambda = 0.8495473 
85.44367 :   9.233453 11.860020  1.957831 
final  value 85.443667 
converged
lambda = 0.008522582 
85.44367 :   9.233449 11.860007  1.957814 
final  value 85.443584 
converged
lambda = 1 
85.44358 :   9.232996 11.859020  1.955928 
final  value 85.443586 
converged
lambda = 0.9999957 
85.44359 :   9.232995 11.859024  1.955916 
109.4525 :   9.968027 11.947208  1.962113 
final  value 89.561436 
converged
lambda = 1 
89.56144 :  10.64021 12.13202  2.02044 
final  value 87.302043 
converged
lambda = 1 
87.30204 :  10.652294 11.966018  1.958371 
final  value 87.200715 
converged
lambda = 1 
87.20072 :  10.666754 11.953497  1.962447 
final  value 87.131462 
stopped after 4 iterations
lambda = 0.8659451 
87.13146 :  10.639094 11.949236  1.971242 
final  value 87.125795 
converged
lambda = 0.6273926 
87.1258 :  10.647784 11.962635  1.975851 
final  value 87.122717 
converged
lambda = 0.8041119 
87.12272 :  10.647957 11.963190  1.973657 
final  value 87.121592 
converged
lambda = 1 
87.12159 :  10.649877 11.962363  1.973516 
final  value 87.121427 
converged
lambda = 1 
87.12143 :  10.649051 11.961685  1.973086 
final  value 87.121355 
converged
lambda = 0.5468903 
87.12135 :  10.648209 11.961208  1.972643 
final  value 87.121400 
converged
lambda = 0.9999045 
87.1214 :  10.648073 11.961122  1.972568 
108.3114 :   9.968027 11.947208  1.962113 
final  value 62.166616 
converged
lambda = 1 
62.16662 :   9.432250 11.803924  1.923472 
final  value 16.887325 
converged
lambda = 1 
16.88732 :   8.006640 11.718631  1.979243 
final  value 15.823276 
converged
lambda = 0.7133884 
15.82328 :   8.135460 12.048708  1.987995 
final  value 15.732737 
stopped after 3 iterations
lambda = 0.7726586 
15.73274 :   8.042059 12.019442  1.994386 
final  value 15.732737 
converged
lambda = 0 
15.73274 :   8.042059 12.019442  1.994386 
109.8066 :   9.968027 11.947208  1.962113 
final  value 56.575819 
converged
lambda = 1 
56.57582 :  10.672415 12.148657  2.027285 
final  value 20.551829 
converged
lambda = 1 
20.55183 :  11.923558 12.366710  2.121476 
final  value 17.268734 
converged
lambda = 1 
17.26873 :  12.266850 12.051876  2.060768 
final  value 17.194623 
converged
lambda = 0.5512476 
17.19462 :  12.176373 12.020546  2.003537 
final  value 17.175845 
converged
lambda = 0.900139 
17.17585 :  12.180837 12.005129  2.019783 
final  value 17.175761 
converged
lambda = 0.1504766 
17.17576 :  12.177202 12.003960  2.011709 
final  value 17.175612 
converged
lambda = 1 
17.17561 :  12.18154 12.00534  2.01894 
final  value 17.175603 
converged
lambda = 1 
17.1756 :  12.181679 12.005403  2.019175 
final  value 17.175518 
converged
lambda = 1 
17.17552 :  12.17954 12.00469  2.01453 
final  value 17.175518 
converged
lambda = 0 
17.17552 :  12.17954 12.00469  2.01453 
