learn_laplacian_pgd_connected    R Documentation

Learns the sparse Laplacian matrix of a connected graph

Description

Learns a connected graph via non-convex, sparsity-promoting regularization functions such as MCP, SCAD, and the reweighted l1-norm.
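For reference, the standard definitions of the MCP and SCAD penalties (applied entrywise to the graph weights) are sketched below. How exactly alpha, gamma, and eps enter this function's objective is an assumption based on common usage: alpha is taken to play the role of the regularization parameter lambda and gamma the concavity parameter (the default gamma = 2.001 sits just above the SCAD requirement gamma > 2).

\rho_{\mathrm{MCP}}(t; \lambda, \gamma) =
\begin{cases}
\lambda |t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda, \\
\dfrac{\gamma\lambda^2}{2}, & |t| > \gamma\lambda,
\end{cases}
\qquad \gamma > 1,

\rho_{\mathrm{SCAD}}(t; \lambda, \gamma) =
\begin{cases}
\lambda |t|, & |t| \le \lambda, \\
\dfrac{2\gamma\lambda|t| - t^2 - \lambda^2}{2(\gamma - 1)}, & \lambda < |t| \le \gamma\lambda, \\
\dfrac{\lambda^2(\gamma + 1)}{2}, & |t| > \gamma\lambda,
\end{cases}
\qquad \gamma > 2,

while the reweighted l1-norm replaces \lambda|t| at iteration k+1 by \lambda|t| / (|t^{(k)}| + \epsilon), with \epsilon given by eps.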
Usage

learn_laplacian_pgd_connected(
  S,
  w0 = "naive",
  alpha = 0,
  sparsity_type = "none",
  eps = 1e-04,
  gamma = 2.001,
  eta = 0.01,
  backtrack = TRUE,
  maxiter = 10000,
  reltol = 1e-05,
  verbose = TRUE
)
Arguments

S
    a p x p sample covariance/correlation matrix, where p is the number of nodes of the graph

w0
    initial estimate for the weight vector of the graph, or a string selecting an initialization method. Available methods are: "qp", which finds the w0 that minimizes ||ginv(S) - L(w0)||_F subject to w0 >= 0; and "naive", which takes w0 as the negative of the off-diagonal elements of the pseudoinverse of S, setting to 0 any elements such that w0 < 0 (see the sketch after this table)

alpha
    hyperparameter controlling the level of sparsity of the estimated graph

sparsity_type
    type of non-convex sparsity regularization. Available methods are: "mcp", "scad", "re-l1", and "none"

eps
    hyperparameter for the reweighted l1-norm

eta
    learning rate

backtrack
    whether to update the learning rate using backtracking line search

maxiter
    maximum number of iterations

reltol
    relative tolerance on the Frobenius norm of the estimated Laplacian matrix, used as the stopping criterion

verbose
    whether or not to show a progress bar displaying the iterations
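As a minimal sketch, the "naive" initialization for w0 referenced above can be reproduced roughly as follows; this is an interpretation of the description, not the package's internal code, and the vectorization convention for the weight vector is an assumption.

# Sketch of the "naive" initialization described for w0; the upper-triangular
# vectorization at the end is an assumption, not a documented convention.
S <- cov(matrix(rnorm(200 * 8), nrow = 200))  # any sample covariance matrix
Sinv <- MASS::ginv(S)                         # pseudoinverse of S
W0 <- pmax(-Sinv, 0)                          # negate entries, clip negatives at 0
diag(W0) <- 0                                 # the diagonal carries no edge weights
w0 <- W0[upper.tri(W0)]                       # candidate initial weight vector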
Value

A list containing possibly the following elements:

    the estimated Laplacian matrix

    the estimated adjacency matrix

    the number of iterations taken to converge

    a boolean flag indicating whether or not the optimization converged

    the elapsed time recorded at every iteration
Author(s)

Ze Vinicius, Jiaxi Ying, and Daniel Palomar
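An illustrative usage sketch follows; it is not taken from the package's documentation. It assumes the package exporting learn_laplacian_pgd_connected is installed and loaded, uses MASS to draw samples from the improper GMRF whose precision matrix is the true Laplacian, and picks an arbitrary value of alpha for illustration.

# Build a connected weighted graph: a ring plus one extra chord
set.seed(42)
p <- 10    # number of nodes
n <- 500   # number of samples
A <- matrix(0, p, p)
for (i in 1:p) {
  j <- if (i < p) i + 1 else 1
  A[i, j] <- A[j, i] <- runif(1, 1, 3)
}
A[1, 5] <- A[5, 1] <- runif(1, 1, 3)
L_true <- diag(rowSums(A)) - A

# Sample data from the (improper) GMRF with precision matrix L_true
X <- MASS::mvrnorm(n, mu = rep(0, p), Sigma = MASS::ginv(L_true))
S <- cov(X)

# Estimate a sparse connected Laplacian with MCP regularization
# (alpha = 0.1 is an illustrative value, not a recommended default)
res <- learn_laplacian_pgd_connected(S, sparsity_type = "mcp",
                                     alpha = 0.1, verbose = FALSE)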