View source: R/learnGraphTopology.R
Learn the Laplacian matrix of a k-component graph

Description

Learns a k-component graph on the basis of an observed data matrix. Check out https://mirca.github.io/spectralGraphTopology for code examples.
Usage

learn_k_component_graph(S, is_data_matrix = FALSE, k = 1,
  w0 = "naive", lb = 0, ub = 10000, alpha = 0, beta = 10000,
  beta_max = 1e+06, fix_beta = TRUE, rho = 0.01, m = 7,
  maxiter = 10000, abstol = 1e-06, reltol = 1e-04, eigtol = 1e-09,
  record_objective = FALSE, record_weights = FALSE, verbose = TRUE)

Arguments

S: either a p x p sample covariance/correlation matrix, or a p x n data matrix, where p is the number of nodes and n is the number of features (or data points per node)

is_data_matrix: whether the matrix S should be treated as a data matrix or as a sample covariance matrix

k: the number of components of the graph

w0: initial estimate for the weight vector of the graph, or a string selecting an appropriate method. Available methods are: "qp": finds w0 that minimizes ||ginv(S) - L(w0)||_F, w0 >= 0; "naive": takes w0 as the negative of the off-diagonal elements of the pseudoinverse, setting to 0 any elements such that w0 < 0

lb: lower bound for the eigenvalues of the Laplacian matrix

ub: upper bound for the eigenvalues of the Laplacian matrix

alpha: L1 regularization hyperparameter

beta: regularization hyperparameter for the term ||L(w) - U Lambda U'||^2_F

beta_max: maximum allowed value for beta

fix_beta: whether or not to fix the value of beta. If set to FALSE, beta will increase (decrease) depending on whether the number of zero eigenvalues is less (greater) than k

rho: how much to increase (decrease) beta when fix_beta = FALSE

m: if is_data_matrix = TRUE, an affinity matrix is built following Nie et al. (2017), where m is the maximum number of possible connections for a given node

maxiter: the maximum number of iterations

abstol: absolute tolerance on the weight vector w

reltol: relative tolerance on the weight vector w

eigtol: value below which eigenvalues are considered to be zero

record_objective: whether to record the objective function values at each iteration

record_weights: whether to record the edge weight values at each iteration

verbose: whether to output a progress bar showing the evolution of the iterations
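The "naive" initialization described above can be sketched in a few lines. This is an illustration of the stated idea, not the package's exact internals: take the pseudoinverse of S as a rough Laplacian estimate, read the negated off-diagonal entries as edge weights, and clip negative weights to zero.

```r
# Sketch of the "naive" w0 initialization (assumed behavior based on the
# argument description, not the package's exact code).
naive_w0 <- function(S) {
  L_hat <- MASS::ginv(S)         # pseudoinverse of S as a Laplacian estimate
  W_hat <- -L_hat                # edge weights are the negated entries
  diag(W_hat) <- 0               # the diagonal carries no edge information
  w0 <- W_hat[upper.tri(W_hat)]  # stack the upper-triangular weights into a vector
  pmax(w0, 0)                    # set to 0 any elements with w0 < 0
}

S <- cov(matrix(rnorm(200), 50, 4))  # toy 4 x 4 sample covariance
w0 <- naive_w0(S)                    # nonnegative vector of length 4*3/2 = 6
```

The clipping step matters because a Laplacian requires nonnegative edge weights, while the pseudoinverse of a noisy covariance matrix can produce entries of either sign.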
Value

A list containing possibly the following elements:

- the estimated Laplacian matrix
- the estimated adjacency matrix
- the estimated weight vector
- optimization variable accounting for the eigenvalues of the Laplacian matrix
- eigenvectors of the estimated Laplacian matrix
- elapsed time recorded at every iteration
- sequence of values taken by beta, when fix_beta = FALSE
- boolean flag indicating whether or not the optimization converged
- values of the objective function at every iteration, when record_objective = TRUE
- values of the negative log-likelihood at every iteration, when record_objective = TRUE
- sequence of weight vectors at every iteration, when record_weights = TRUE
Author(s)

Ze Vinicius and Daniel Palomar

References

S. Kumar, J. Ying, J. V. de Miranda Cardoso, D. P. Palomar. A unified framework for structured graph learning via spectral constraints (2019). https://arxiv.org/pdf/1904.09792.pdf
Examples

# design true Laplacian
Laplacian <- rbind(c( 1, -1,  0,  0),
                   c(-1,  1,  0,  0),
                   c( 0,  0,  1, -1),
                   c( 0,  0, -1,  1))
n <- ncol(Laplacian)
# sample data from multivariate Gaussian
Y <- MASS::mvrnorm(n * 500, rep(0, n), MASS::ginv(Laplacian))
# estimate graph on the basis of sampled data
graph <- learn_k_component_graph(cov(Y), k = 2, beta = 10)
graph$Laplacian
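A k-component graph has exactly k zero eigenvalues in its Laplacian, which is what the eigtol argument tests for. A quick sanity check, shown here on the true Laplacian from the example (the same check applies to an estimated one):

```r
# The number of connected components of a graph equals the number of
# zero eigenvalues of its Laplacian; count them below a small tolerance
# (cf. the eigtol argument).
Laplacian <- rbind(c( 1, -1,  0,  0),
                   c(-1,  1,  0,  0),
                   c( 0,  0,  1, -1),
                   c( 0,  0, -1,  1))
n_components <- sum(abs(eigen(Laplacian, symmetric = TRUE)$values) < 1e-9)
n_components  # 2, matching k = 2 in the example above
```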
