Description

This function implements an accelerated proximal gradient method (Nesterov 2007, Beck and Teboulle 2009). It solves:
min_x (f(x) + h(x)), x \in R^dim_x
where f is smooth and convex, and h is non-smooth and convex but simple, in the sense that the proximal operator of h is easy to evaluate.
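As a rough illustration of the method (a minimal sketch, not the package's actual implementation), fixed-step accelerated proximal gradient iterations might look as follows; the name apg_sketch and the parameters t, max_iter and tol are hypothetical, while grad_f and prox_h follow the signatures documented under Arguments below.

# Minimal sketch of accelerated proximal gradient (FISTA-style) iterations
# with a fixed step size t; the real apg() may differ, e.g. by using a
# backtracking line search and other stopping criteria.
apg_sketch <- function(grad_f, prox_h, dim_x, opts,
                       t = 1, max_iter = 1000, tol = 1e-6) {
  x <- rep(0, dim_x)   # current iterate
  y <- x               # extrapolated (momentum) point
  theta <- 1
  for (k in 1:max_iter) {
    x_old <- x
    # proximal gradient step taken at the extrapolated point
    x <- prox_h(y - t * grad_f(y, opts), t, opts)
    # Nesterov momentum update
    theta_old <- theta
    theta <- (1 + sqrt(1 + 4 * theta_old^2)) / 2
    y <- x + ((theta_old - 1) / theta) * (x - x_old)
    if (sqrt(sum((x - x_old)^2)) < tol) break
  }
  list(x = x, t = t)
}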
Usage

apg(grad_f, prox_h, dim_x, opts)
Arguments

grad_f
    A function that computes the gradient of f: grad_f(v, opts) = df(v)/dv.

prox_h
    A function that computes the proximal operator of h: prox_h(v, t, opts) = argmin_x (t*h(x) + 1/2 * norm(x - v)^2). See the sketch after this list for illustrative implementations of grad_f and prox_h.

dim_x
    The dimension of the unknown x.

opts
    A list of parameters, both for the optimization algorithm and for the functions grad_f and prox_h, to which it is passed as their last argument.
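For concreteness, here is a minimal sketch of what user-supplied grad_f and prox_h could look like for the Lasso problem used in the Examples below (analogous to the grad.quad and prox.l1 functions called there); the names my.grad.quad and my.prox.l1 are hypothetical, and the sketch assumes opts is a list carrying A, b and lambda.

# Sketch of user-supplied functions for the Lasso
#   f(x) = 1/2 * norm(A %*% x - b)^2,   h(x) = lambda * ||x||_1,
# assuming opts is a list with elements A, b and lambda.
my.grad.quad <- function(v, opts) {
  # gradient of the quadratic loss: t(A) %*% (A %*% v - b)
  as.vector(crossprod(opts$A, opts$A %*% v - opts$b))
}
my.prox.l1 <- function(v, t, opts) {
  # proximal operator of t * lambda * ||.||_1: soft-thresholding
  sign(v) * pmax(abs(v) - t * opts$lambda, 0)
}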
Value

A list with x, the solution of the problem
min_x (f(x) + h(x)), x \in R^dim_x,
and t, the last step size used.
Examples

# Solve a Lasso problem:
# min_x 1/2 * norm(A %*% x - b)^2 + lambda * ||x||_1
n <- 50
m <- 20
lambda <- 1
A <- matrix(rnorm(m*n), nrow=n)
b <- rnorm(n)
r <- apg(grad.quad, prox.l1, m, list(A=A, b=b, lambda=lambda) )
# This gives the same result as:
# m <- glmnet(A,b,alpha=1, standardize=FALSE,lambda=1/50,intercept=FALSE)
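To sanity-check that comment, the apg solution can be compared to glmnet directly. This is only a sketch, assuming the glmnet package is installed; it relies on the fact that glmnet scales the squared-error loss by 1/n, so lambda = 1 here corresponds to lambda/n = 1/50 on the glmnet scale, and the name fit is hypothetical.

# Optional comparison with glmnet (requires the glmnet package); glmnet
# minimizes 1/(2*n) * norm(A %*% x - b)^2 + lambda * ||x||_1, so the
# lambda above corresponds to lambda / n on the glmnet scale.
if (requireNamespace("glmnet", quietly = TRUE)) {
  fit <- glmnet::glmnet(A, b, alpha = 1, standardize = FALSE,
                        lambda = lambda / n, intercept = FALSE)
  max(abs(r$x - as.vector(fit$beta)))   # small if both solvers converged
}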