Fit a low-rank SVD to a complete matrix by alternating orthogonal ridge regressions. Special sparse-matrix classes are available for very large matrices, including "SparseplusLowRank" versions for row- and column-centered sparse matrices.
x 
An m by n matrix. Large matrices can be in "sparseMatrix" format, as
well as "SparseplusLowRank". The latter arise after centering sparse
matrices, for example with biScale.
rank.max 
The maximum rank for the solution. This is also the dimension of the left and right matrices of orthogonal singular vectors. 'rank.max' should be no bigger than 'min(dim(x))'.
lambda 
The regularization parameter. 
thresh 
convergence threshold, measured as the relative change in the Frobenius norm between two successive estimates.
maxit 
maximum number of iterations. 
trace.it 
with trace.it=TRUE, convergence progress is reported at each iteration.
warm.start 
an svd object can be supplied as a warm start. If the solution requested has higher rank than the warm start, the additional subspace is initialized with random Gaussians (and then orthogonalized with respect to the rest).
final.svd 
Although in theory this algorithm converges to the solution of a
nuclear-norm regularized low-rank matrix approximation problem,
with potentially some singular values equal to zero, in practice only
near-zeros are achieved. This final step does one more iteration with
lambda=0, followed by soft-thresholding of the singular values.

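The warm-start initialization described for 'warm.start' above can be sketched in base R: extra random Gaussian columns are appended to the existing orthonormal basis and then orthogonalized against it. This is an illustrative sketch, not the package's internal code; all names here are hypothetical.

```r
# Sketch (base R, not softImpute internals): growing a rank-5 warm start
# to rank 8 by appending random Gaussian columns and orthogonalizing them
# with respect to the existing subspace.
set.seed(1)
m <- 50
U_old <- qr.Q(qr(matrix(rnorm(m * 5), m, 5)))       # existing orthonormal basis
extra <- matrix(rnorm(m * 3), m, 3)                 # random Gaussian columns
extra <- extra - U_old %*% crossprod(U_old, extra)  # project out the U_old part
extra <- qr.Q(qr(extra))                            # orthonormalize the remainder
U_new <- cbind(U_old, extra)
max(abs(crossprod(U_new) - diag(8)))                # effectively zero: orthonormal
```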
This algorithm solves the problem

\min_M ||X - M||_F^2 + \lambda ||M||_*

subject to rank(M) <= r, where ||M||_* is the nuclear norm of M (the sum of its singular values). It achieves this by solving the related problem

\min_{A,B} ||X - AB'||_F^2 + (\lambda/2)(||A||_F^2 + ||B||_F^2)

subject to rank(A) = rank(B) <= r. The solution is a rank-restricted, soft-thresholded SVD of X.
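The characterization above can be illustrated directly in base R: soft-threshold the singular values of X and truncate to rank r. This is a minimal sketch; the threshold level mirrors the `pmax(svd(x)$d - lambda, 0)` comparison used in the Examples.

```r
# Sketch (base R): forming a rank-restricted, soft-thresholded SVD of X.
set.seed(101)
x <- matrix(rnorm(20 * 10), 20, 10)
lambda <- 2
r <- 5                                         # plays the role of rank.max
s <- svd(x)
d_soft <- pmax(s$d[seq_len(r)] - lambda, 0)    # soft-threshold, keep first r
M <- s$u[, seq_len(r)] %*% (d_soft * t(s$v[, seq_len(r)]))
svd(M)$d[seq_len(r)]                           # recovers d_soft
```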
An svd object is returned, with components "u", "d", and "v".
u 
an m by rank.max matrix with the left orthogonal singular vectors
d 
a vector of length rank.max with the (soft-thresholded) singular values
v 
an n by rank.max matrix with the right orthogonal singular vectors
Trevor Hastie, Rahul Mazumder
Maintainer: Trevor Hastie hastie@stanford.edu
Rahul Mazumder, Trevor Hastie and Rob Tibshirani (2010)
Spectral Regularization Algorithms for Learning Large Incomplete
Matrices,
http://www.stanford.edu/~hastie/Papers/mazumder10a.pdf
Journal of Machine Learning Research 11 (2010) 2287-2322
biScale, softImpute, Incomplete, lambda0, impute, complete
# create a matrix and run the algorithm
library(softImpute)
set.seed(101)
n=100
p=50
J=25
np=n*p
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
fit=svd.als(x,rank=25,lambda=50)
fit$d
pmax(svd(x)$d-50,0)
# now create a sparse matrix and do the same
nnz=trunc(np*.3)
inz=sample(seq(np),nnz,replace=FALSE)
i=row(x)[inz]
j=col(x)[inz]
x=rnorm(nnz)
xS=sparseMatrix(x=x,i=i,j=j)
fit2=svd.als(xS,rank=20,lambda=7)
fit2$d
pmax(svd(as.matrix(xS))$d-7,0)
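# A warm start can also be supplied to continue toward a higher-rank
# solution. A self-contained sketch (assumes the softImpute package is
# installed and attached):

```r
library(softImpute)
set.seed(3)
x <- matrix(rnorm(40 * 30), 40, 30)
fit_low <- svd.als(x, rank.max = 5, lambda = 2)
# reuse the rank-5 fit as a warm start for a rank-10 request; the extra
# subspace is initialized with random Gaussians and orthogonalized
fit_high <- svd.als(x, rank.max = 10, lambda = 2, warm.start = fit_low)
fit_high$d
```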
