Description:

     The main function for fitting sparse additive models via the
     additive hierbasis estimator.
Usage:

     AdditiveHierBasis(X, y, nbasis, max.lambda, lam.min.ratio, nlam,
                       beta.mat, alpha, m.const, max.iter, tol, type,
                       weights, basis.type)
Arguments:

     X: An n x p matrix of covariates.

     y: A univariate vector representing the response.

     nbasis: The number of basis functions to be used in the basis
          expansion of X.

     max.lambda: The largest value of λ penalizing the hierarchical
          penalty.

     lam.min.ratio: The ratio of the smallest value of λ to the
          largest λ.

     nlam: The number of λ values at which to compute the solution
          path.

     beta.mat: An initial estimate of the parameter beta.

     alpha: A scalar between 0 and 1 controlling the balance between
          the sparsity penalty and the hierarchical penalty. Default
          is 0.5.

     m.const: Smoothing parameter controlling the degree of
          polynomial smoothing/decay performed by the weights in the
          penalty Ω.

     max.iter: Maximum number of iterations for block coordinate
          descent.

     tol: Tolerance/stopping precision for the block coordinate
          descent algorithm.

     type: Specifies the model type: Gaussian regression or binomial
          (logistic) regression.

     weights: (New parameter.) Optional weights used for
          smoothing/penalizing the basis functions in the penalty Ω.

     basis.type: (New parameter.) The basis expansion family:
          polynomial, trigonometric, or wavelet.
Details:

     Solves the multivariate minimization problem (see Haris et al.
     (2016) for details) for β:

     argmin_{β_1, ..., β_p} (1/(2n)) || y - ∑_j Ψ^{(j)}_K β_j ||^2_2
       + λ^2 (1 - α) ∑_j || Ψ^{(j)}_K β_j ||_2
       + λ α ∑_j Ω_j(β_j; m),

     where each β_j is a vector of length K = nbasis and the sums over
     j run over the p covariates.

     The penalty function Ω_j(β_j; m) is given by

     Ω_j(β_j; m) = ∑_{k=1}^{K} w_{k, m} || β_{j, [k:K]} ||_2,

     where β_{j, [k:K]} = (β_{j,k}, ..., β_{j,K}) is the tail of the
     coefficient vector for the j-th predictor (beta[k:K] in R
     notation), and the sum runs over k = 1, ..., K.

     Finally, the weights w_{k, m} are given (by default) by

     w_{k, m} = k^m - (k - 1)^m,

     where m denotes the smoothness level. For details see Haris et
     al. (2016).
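The default weights and the penalty Ω_j above can be sketched in a few lines of base R. This is an illustration only, not code from the package; the helper names `hier_weights` and `omega_j` are hypothetical.

```r
# Default penalty weights w_{k,m} = k^m - (k-1)^m (illustrative sketch,
# not from the package).
hier_weights <- function(K, m) (1:K)^m - (0:(K - 1))^m

# Omega_j(beta_j; m) = sum_k w_{k,m} * || beta_j[k:K] ||_2 for a single
# predictor's coefficient vector beta_j of length K.
omega_j <- function(beta_j, m) {
  K <- length(beta_j)
  w <- hier_weights(K, m)
  # l2 norm of each coefficient "tail" beta_j[k:K]
  tail_norms <- sapply(1:K, function(k) sqrt(sum(beta_j[k:K]^2)))
  sum(w * tail_norms)
}

hier_weights(4, m = 2)            # 1 3 5 7
omega_j(c(1, 0.5, 0.25, 0), m = 2)
```

Because the weights grow with k, higher-order basis coefficients appear in more of the tail norms and are penalized more heavily, which is what drives the hierarchical shrinkage toward smooth fits.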
Value:

     An object of class additivehierbasis with the following elements:

     X, y: The original design matrix X and response vector y.

     beta: The estimated coefficients β, stored as a matrix with one
          column per tuning parameter λ.

     beta.array: The estimated coefficients stored as a
          three-dimensional array (basis function, predictor, λ).

     intercept: The estimated intercepts, one per tuning parameter λ.

     fitted.values: The fitted values, one column per tuning
          parameter λ.

     basis.expansion: The basis expansion Ψ of the design matrix X.

     basis.expansion.means: The column means of the basis expansion.

     ybar: Mean of the response vector.

     lambdas: Sequence of tuning parameters λ used for penalizing the
          fits.

     alpha: The scalar controlling the balance between the
          sparsity-inducing penalty and the hierarchical-sparsity-
          inducing penalty.

     m.const: The smoothing parameter m.const used in the penalty Ω.

     nbasis: The maximum number of basis functions used for computing
          the basis expansion of X.

     max.iter: Maximum number of iterations used for the block
          coordinate descent algorithm.

     tol: Tolerance/stopping precision used for block coordinate
          descent.

     weights: The weights used for smoothing/penalizing the basis
          functions.

     active: The size of the active set (number of nonzero β) per
          tuning parameter λ.

     active.mat: The size of the active set per predictor (row-wise),
          per tuning parameter λ (column-wise).

     type: The specified model family.

     basis.type: The specified basis expansion family: polynomial,
          trigonometric, or wavelet.
Author(s):

     Annik Gougeon, David Fleischer (david.fleischer@mail.mcgill.ca).
References:

     Haris, A., Shojaie, A. and Simon, N. (2016). Nonparametric
     Regression with Adaptive Smoothness via a Convex Hierarchical
     Penalty. Available on request by authors.
See Also:

     The original AdditiveHierBasis function, as implemented by Haris
     et al. (2016), can be found at
     https://github.com/asadharis/HierBasis/.
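A minimal usage sketch, assuming the HierBasis package (see See Also) is installed and exports AdditiveHierBasis with the argument names documented on this page; the simulated data and chosen settings are illustrative only.

```r
# Simulate a small additive-model dataset: y depends nonlinearly on the
# first two of three covariates.
set.seed(1)
n <- 100; p <- 3
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- sin(X[, 1]) + 0.5 * X[, 2]^2 + rnorm(n, sd = 0.1)

# Fit only if the package is available (hedged: argument defaults for
# the lambda sequence are left at their package values).
if (requireNamespace("HierBasis", quietly = TRUE)) {
  fit <- HierBasis::AdditiveHierBasis(X, y, nbasis = 10, m.const = 3)
  dim(fit$beta)    # one column of coefficients per tuning parameter lambda
  fit$active       # active-set size along the lambda path
}
```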