summary.hyperblm: Summary Output of Hyperbolic Regression


Description

Obtains summary output from an object of class 'hyperblm'. The summary output includes the standard errors, t-statistics and p-values of the coefficient estimates, as well as the estimated parameters of the hyperbolic error distribution, the maximum likelihood value, the stage-one optimization method, the number of two-stage alternating iterations and the convergence code.

Usage

## S3 method for class 'hyperblm'
summary(object, hessian = FALSE, nboots = 1000, ...)

## S3 method for class 'summary.hyperblm'
print(x, digits = max(3, getOption("digits") - 3), ...)

Arguments

object

An object of class "hyperblm".

x

An object of class "summary.hyperblm" resulting from a call to summary.hyperblm.

hessian

Logical. If TRUE, the standard errors are calculated from the Hessian matrix and the Hessian matrix is also returned. Otherwise, the standard errors are approximated by bootstrapping. See Details.

nboots

Numeric. The number of bootstrap simulations used to obtain the bootstrap estimates of the parameters' standard errors.

digits

Numeric. Desired number of digits when the object is printed.

...

Additional arguments passed to the functions bSE and hyperblmhessian.

Details

The function summary.hyperblm provides two approaches to obtaining the standard errors of the parameters, because the approximated Hessian matrix is not stable for such a complex optimization.

The first approach uses the approximated Hessian matrix, selected by setting hessian = TRUE. The Hessian matrix is approximated by the function tsHessian. However, it may not be reliable for some error distribution parameters; for instance, the variances obtained from the Hessian matrix may be negative. The second approach is parametric bootstrapping, selected by hessian = FALSE, which is the default. The default number of bootstrap simulations is 1000, but users can increase this when accuracy has priority over efficiency. Although bootstrapping is fairly slow, it provides reliable standard errors.
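A minimal sketch contrasting the two settings is given below. It mirrors the data preparation used in the Examples section and is illustrative only; both the fit and the bootstrap can take some time to run.

library(GeneralizedHyperbolic)

airflow <- stackloss[, 1]
temperature <- stackloss[, 2]
acid <- stackloss[, 3]
stack <- stackloss[, 4]

hyperblm.fit <- hyperblm(stack ~ airflow + temperature + acid,
                         tolerance = 1e-11)

## Hessian-based standard errors: fast, but possibly unreliable
summary(hyperblm.fit, hessian = TRUE)

## Bootstrap standard errors (the default): slower, but more reliable;
## nboots (here an illustrative 500) trades accuracy against running time
summary(hyperblm.fit, hessian = FALSE, nboots = 500)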

Value

summary.hyperblm returns an object of class summary.hyperblm which is a list containing the following components (a short access sketch follows the list):

coefficients

A named vector of regression coefficients.

distributionParams

A named vector of fitted hyperbolic error distribution parameters.

fitted.values

The fitted mean values.

residuals

The residuals, obtained by subtracting the fitted values from the response.

MLE

The maximum likelihood value of the model.

method

The optimization method for stage one.

paramStart

The start values of parameters that the user specified (only where relevant).

residsParamStart

The start values of parameters returned by hyperbFitStand (only where relevant).

call

The matched call.

terms

The terms object used.

contrasts

The contrasts used (only where relevant).

xlevels

The levels of the factors used in the fitting (only where relevant).

offset

The offset used (only where relevant).

xNames

The names of the explanatory variables. If the explanatory variables have no names, they are named x.

yVec

The response vector.

xMatrix

The explanatory variables matrix.

iterations

The number of two-stage alternating iterations to convergence.

convergence

The convergence code for the two-stage optimization: 0 if the system converged; 1 if the first stage did not converge; 2 if the second stage did not converge; 3 if both stages did not converge.

breaks

The cell boundaries found by a call to hist.

hessian

The Hessian matrix. Only returned when hessian = TRUE.

tval

t-statistics of regression coefficient estimates.

rdf

The residual degrees of freedom.

pval

p-values of the regression coefficient estimates.

sds

Standard errors of regression coefficient estimates.
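As an illustration only, and assuming sry holds a summary of the hyperblm.fit object from the sketch in Details, the per-coefficient components can be assembled into a coefficient table and the optimization diagnostics inspected roughly as follows; the exact layout produced by print.summary.hyperblm may differ.

sry <- summary(hyperblm.fit)   # bootstrap standard errors by default; can be slow

## Assemble a coefficient table from the returned components
coefTable <- cbind(Estimate  = sry$coefficients,
                   Std.Error = sry$sds,
                   t.value   = sry$tval,
                   p.value   = sry$pval)
round(coefTable, 4)

## Optimization diagnostics
sry$MLE           # maximum likelihood value of the model
sry$iterations    # number of two-stage alternating iterations
sry$convergence   # 0 indicates the system converged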

Author(s)

David Scott d.scott@auckland.ac.nz, Xinxing Li xli053@aucklanduni.ac.nz

References

Barndorff-Nielsen, O. (1977). Exponentially Decreasing Distributions for the Logarithm of Particle Size. In Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, Vol. 353, pp. 401–419.

Prause, K. (1999). The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg.

Trendall, Richard (2005). hypReg: A Function for Fitting a Linear Regression Model in R with Hyperbolic Error. Masters Thesis, Statistics Faculty, University of Auckland.

Paolella, Marc S. (2007). Intermediate Probability: A Computational Approach. Chichester: Wiley, pp. 415.

Scott, David J. and Wurtz, Diethelm and Chalabi, Yohan, (2011). Fitting the Hyperbolic Distribution with R: A Case Study of Optimization Techniques. In preparation.

Stryhn, H. and Christensen, J. (2003). Confidence intervals by the profile likelihood method, with applications in veterinary epidemiology. ISVEE X.

See Also

print.summary.hyperblm prints the summary output in a table. hyperblm fits a linear model with hyperbolic error distribution. print.hyperblm prints the regression results in a table. coef.hyperblm obtains the regression coefficients and error distribution parameters of the fitted model. plot.hyperblm produces a residuals versus fitted values plot, a histogram of the residuals with the error distribution density curve superimposed, a histogram of the log residuals with the error distribution density curve superimposed, and a Q-Q plot. tsHessian approximates the Hessian matrix.

Examples

## stackloss data example

# airflow <- stackloss[, 1]
# temperature <- stackloss[, 2]
# acid <- stackloss[, 3]
# stack <- stackloss[, 4]

# hyperblm.fit <- hyperblm(stack ~ airflow + temperature + acid,
#                          tolerance = 1e-11)

# coef(hyperblm.fit)
# plot(hyperblm.fit, breaks = 20)
# summary(hyperblm.fit, hessian = FALSE)

