R/RcppExports.R

# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393

#' Truly Sparse Multivariate Regression
#'
#' Solver for multivariate regression with covariance/precision
#' matrix estimation and absolute sparsity constraints.
#'
#' The tsmvr solver works by alternating blockwise coordinate descent,
#' where each iteration updates the regression matrix (the B-step) and
#' then the precision matrix (the Omega-step). The B-step update is
#' always made by gradient descent; for the Omega-step the user may
#' choose either gradient descent or direct minimization. In this
#' package the first mode is called 'gd-gd' and the second 'gd-min'.
#' Although gd-min is faster than gd-gd, for some problems direct
#' minimization causes the solution to explode, much as gradient descent
#' does when the learning rate is set too large. As usual, setting the
#' learning rate too large for gradient descent itself will also cause
#' the solution to explode. The best way to find a solution is to use
#' trial and error (or some more principled method) to find the largest
#' learning rate(s) that yield convergence in either gd-gd or gd-min
#' mode (a schematic sketch of the alternation follows this
#' documentation block).
#'
#' In general, \code{eta1} may be set as high as 0.1 or 0.2 for some
#' nicely behaved problems, and it is rare that any problem can
#' be solved with a larger learning rate. Similarly, \code{eta2}
#' may be set as high as 0.5 for some generous problems, and it
#' is rare that larger values will ever work on any problem. For most
#' problems \code{eta1} and \code{eta2} will need to be set to lower
#' values, anywhere from 0.1 down to 0.0001.
#'
#' The sparsity parameters \code{s1} and \code{s2} specify the
#' sparsity of the B and Omega matrices as the algorithm iterates.
#' They act as regularizers, constraining the space of possible
#' solutions at each iteration. For real-world problems, the best
#' values of \code{s1} and \code{s2} need to be found by
#' cross-validation and grid search. \code{s2} is bounded below by
#' \code{q}, since the precision matrix must be at least diagonal.
#'
#' A value of \code{1e-4} is usually good for \code{epsilon}. The author
#' rarely finds problems where smaller values of \code{epsilon} give
#' solutions with better out-of-sample prediction.
#' Conversely, good predictive solutions can often be found
#' using values of \code{1e-3} or even \code{1e-2}.
#'
#' For speed, \code{tsmvr_solve} is implemented in Rcpp.
#'
#' @param X design matrix (n-by-p)
#' @param Y response matrix (n-by-q)
#' @param s1 sparsity parameter for regression matrix (positive integer)
#' @param s2 sparsity parameter for precision matrix (positive integer)
#' @param B_type type of descent for regression steps (string: 'gd')
#' @param Omega_type type of descent for precision matrix steps (string: 'gd' or 'min')
#' @param eta1 B-step learning rate (positive numeric)
#' @param eta2 Omega-step learning rate (positive numeric)
#' @param rho1 additional B-step tuning parameter (positive numeric)
#' @param rho2 additional Omega-step tuning parameter (positive numeric)
#' @param beta1 additional B-step tuning parameter (positive numeric)
#' @param beta2 additional Omega-step tuning parameter (positive numeric)
#' @param epsilon convergence parameter (positive numeric)
#' @param max_iter maximum number of iterations (positive integer)
#' @param skip number of iterations between progress prints to the screen (positive integer)
#' @param quiet whether or not to operate quietly, suppressing printing to the screen (bool)
#' @param suppress whether or not to suppress warnings (bool)
#'
#' @references \insertRef{chen2016high}{tsmvr}
#'
#' @return A list of algorithm output, including:
#'
#' \code{B_hat} - final iterate of the regression matrix \cr
#' \code{Omega_hat} - final iterate of the precision matrix \cr
#' \code{objective} - final value of the objective function \cr
#' \code{B_history} - list of all regression matrix iterates \cr
#' \code{Omega_history} - list of all precision matrix iterates \cr
#' \code{objective_history} - list of the objective function values for each iteration \cr
#' \code{iterations} - number of iterations \cr
#' \code{time} - algorithm time (seconds) \cr
#' \code{Y_hat} - fitted response, given by \code{X*B_hat} \cr
#' \code{residuals} - difference between the actual and fitted responses
#'
#'
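#' @examples
#' \dontrun{
#' # Illustrative sketch only (not run). The tuning parameters documented
#' # above are assumed to be collected in the `pars` list; the exact names
#' # and structure of `pars` are assumptions, so consult the package source.
#' set.seed(1)
#' n <- 50; p <- 10; q <- 3
#' X <- matrix(rnorm(n * p), n, p)
#' B <- rbind(matrix(1, 3, q), matrix(0, p - 3, q))
#' Y <- X %*% B + matrix(rnorm(n * q), n, q)
#' fit <- tsmvr_solve(
#'   X, Y, s1 = 3 * q, s2 = q,
#'   pars = list(B_type = "gd", Omega_type = "min", eta1 = 0.01, eta2 = 0.1,
#'               epsilon = 1e-4, max_iter = 100, skip = 10,
#'               quiet = TRUE, suppress = TRUE)
#' )
#' fit$B_hat
#' }
#'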
NULL
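
# What follows is a minimal, hedged R sketch of the alternating scheme
# described in the documentation above (B-step by gradient descent with
# hard thresholding, Omega-step by either gradient descent or direct
# minimization), assuming the standard negative log-likelihood objective
# of the cited reference. It is purely illustrative and is not the
# package's compiled implementation; `hard_threshold` and `tsmvr_sketch`
# are hypothetical names introduced here.
hard_threshold <- function(A, s) {
  # Keep the s largest-magnitude entries of A and zero the rest.
  keep <- order(abs(A), decreasing = TRUE)[seq_len(s)]
  out <- matrix(0, nrow(A), ncol(A))
  out[keep] <- A[keep]
  out
}

tsmvr_sketch <- function(X, Y, s1, s2, eta1 = 0.01, eta2 = 0.1,
                         Omega_type = c("gd", "min"), max_iter = 100) {
  Omega_type <- match.arg(Omega_type)
  n <- nrow(X); p <- ncol(X); q <- ncol(Y)
  B <- matrix(0, p, q)   # regression matrix iterate
  Omega <- diag(q)       # precision matrix iterate
  for (k in seq_len(max_iter)) {
    # B-step: gradient of tr((Y - XB)' (Y - XB) Omega) / n, then threshold.
    grad_B <- -2 / n * t(X) %*% (Y - X %*% B) %*% Omega
    B <- hard_threshold(B - eta1 * grad_B, s1)
    S <- crossprod(Y - X %*% B) / n   # residual covariance
    if (Omega_type == "gd") {
      # Omega-step ('gd-gd'): gradient of tr(S Omega) - log det(Omega).
      Omega <- Omega - eta2 * (S - solve(Omega))
    } else {
      # Omega-step ('gd-min'): direct minimizer of the same objective.
      Omega <- solve(S)
    }
    Omega <- hard_threshold(Omega, s2)
  }
  list(B_hat = B, Omega_hat = Omega)
}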

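# Thin R wrapper around the compiled C++ routine `_tsmvr_ht`. Judging by its
# name and the absolute sparsity constraints described above, `ht` appears to
# be a hard-thresholding operator that keeps the `s` largest-magnitude entries
# of `X`; this reading, and the role of `ss`, are assumptions. Hedged usage:
#   A <- matrix(rnorm(9), 3, 3)
#   ht(A, s = 3)   # presumably zeros all but the 3 largest-magnitude entries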
ht <- function(X, s, ss = FALSE) {
    .Call(`_tsmvr_ht`, X, s, ss)
}

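# Thin R wrapper around the compiled solver `_tsmvr_tsmvr_solve`; the roxygen
# block above documents the individual tuning parameters, which are presumably
# passed through the `pars` argument (an assumption; see the examples above).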
tsmvr_solve <- function(X, Y, s1, s2, pars) {
    .Call(`_tsmvr_tsmvr_solve`, X, Y, s1, s2, pars)
}

# Register entry points for exported C++ functions
methods::setLoadAction(function(ns) {
    .Call('_tsmvr_RcppExport_registerCCallable', PACKAGE = 'tsmvr')
})