Iterative methods for matrix completion that use nuclear-norm regularization. There are two main approaches. The first uses iterative soft-thresholded SVDs to impute the missing values; the second uses alternating least squares. Both have an "EM" flavor, in that at each iteration the matrix is completed with the current estimate. For large matrices there is a special sparse-matrix class, "Incomplete", that handles all computations efficiently. The package includes procedures for centering and scaling rows, columns, or both, and for computing low-rank SVDs of large sparse centered matrices (i.e. principal components).
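The two approaches above can be sketched with the package's main entry points; this is a minimal example, assuming the CRAN `softImpute` package is installed (the small random matrix and its parameters are illustrative, not from the package docs):

```r
library(softImpute)

set.seed(1)
x <- matrix(rnorm(30), 6, 5)
x[sample(30, 10)] <- NA          # introduce missing entries

# Approach 1: iterative soft-thresholded SVDs
fit.svd <- softImpute(x, rank.max = 3, lambda = 1, type = "svd")

# Approach 2: alternating least squares (the default)
fit.als <- softImpute(x, rank.max = 3, lambda = 1, type = "als")

# Complete the matrix with the low-rank estimate ("EM" flavor:
# missing entries are filled from the current fit)
x.complete <- complete(x, fit.als)
```

Both calls return a `"softImpute"` object holding the soft-thresholded SVD components `u`, `d`, and `v`.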
|Author|Trevor Hastie <firstname.lastname@example.org> and Rahul Mazumder <email@example.com>|
|Date of publication|2015-04-08 00:42:55|
|Maintainer|Trevor Hastie <firstname.lastname@example.org>|
biScale: standardize a matrix to have optionally row means zero and...
complete: make predictions from an svd object
deBias: Recompute the '$d' component of a '"softImpute"' object...
Incomplete: create a matrix of class 'Incomplete'
Incomplete-class: Class '"Incomplete"'
lambda0: compute the smallest value for 'lambda' such that...
softImpute: impute missing values for a matrix via nuclear-norm...
softImpute-internal: Internal softImpute functions
SparseplusLowRank-class: Class '"SparseplusLowRank"'
splr: create a 'SparseplusLowRank' object
svd.als: compute a low rank soft-thresholded svd by alternating...
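Several of the indexed functions are designed to work together in the large-sparse setting. A hedged sketch, assuming the CRAN `softImpute` package is installed (the simulated indices and the choice of `lambda0(x)/10` are illustrative assumptions, not package recommendations):

```r
library(softImpute)

set.seed(2)
# Simulate 200 distinct observed entries of a 100 x 50 matrix
idx <- sample(100 * 50, 200)
i <- ((idx - 1) %% 100) + 1      # row indices of observed entries
j <- ((idx - 1) %/% 100) + 1     # column indices
v <- rnorm(200)                  # observed values

xs <- Incomplete(i, j, v)        # sparse matrix of class "Incomplete"

# Center rows and columns (scaling switched off here)
xs.c <- biScale(xs, row.scale = FALSE, col.scale = FALSE)

lam0 <- lambda0(xs.c)            # smallest lambda giving the zero solution
fit  <- softImpute(xs.c, rank.max = 10, lambda = lam0 / 10)

# Recompute the '$d' component without shrinkage
fit.d <- deBias(xs.c, fit)
```

Working through `Incomplete` keeps the observed entries sparse throughout, which is what makes the centering and the low-rank SVD computations feasible for large matrices.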