tucker.nonneg: sparse (semi-)nonnegative Tucker decomposition


View source: R/rTensor_Decomp.R

Description

Decomposes the nonnegative tensor tnsr into a core tensor Z (optionally constrained to be nonnegative) and sparse nonnegative factor matrices U[n].

Usage

tucker.nonneg(tnsr, ranks, core_nonneg = TRUE, tol = 1e-04, hosvd = FALSE,
  max_iter = 500, max_time = 0, lambda = rep.int(0, length(ranks) + 1),
  L_min = 1, rw = 0.9999, bound = Inf, U0 = NULL, Z0 = NULL,
  verbose = FALSE, unfold_tnsr = length(dim(tnsr)) * prod(dim(tnsr)) <
  4000^2)

Arguments

tnsr

nonnegative tensor with K modes

ranks

an integer vector of length K specifying the mode sizes of the output core tensor Z

core_nonneg

constrain core tensor Z to be nonnegative

tol

relative Frobenius norm error tolerance

hosvd

if TRUE, apply higher-order SVD (HOSVD) to improve the initial U and Z

max_iter

maximum number of iterations while the error stays above tol

max_time

maximum allowed running time

lambda

a vector of length K+1 of sparsity regularization coefficients: one for each factor matrix, plus one for the core tensor

L_min

lower bound for the Lipschitz constants of the gradients of the residual error l(Z,U) = fnorm(tnsr - ttl(Z, U)) with respect to Z and each U[n]

rw

controls the extrapolation weight

bound

upper bound for the elements of Z and U[[n]] (applies only to those with a zero regularization coefficient lambda)

U0

initial factor matrices; they default to nonnegative Gaussian random matrices

Z0

initial core tensor Z; it defaults to a nonnegative Gaussian random tensor

verbose

if TRUE, print the algorithm progress

unfold_tnsr

precompute the matrix unfolding of tnsr along every mode (speeds up the calculation, but may require a lot of memory)
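
A minimal sketch of a typical call, using toy data; the tensor, ranks, lambda values, and seed below are illustrative, not package defaults:

library(rTensor)

## toy nonnegative 20 x 30 x 40 tensor (illustrative data, not from the package)
set.seed(42)
tnsr <- as.tensor(array(runif(20 * 30 * 40), dim = c(20, 30, 40)))

## rank-(4, 5, 6) sparse nonnegative Tucker decomposition;
## lambda has length K + 1 = 4: one coefficient per factor matrix plus the core
decomp <- tucker.nonneg(tnsr, ranks = c(4, 5, 6),
                        lambda = c(0.1, 0.1, 0.1, 0.01),
                        max_iter = 200, verbose = TRUE)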

Details

The function uses the alternating proximal gradient method to solve the following optimization problem:

\min 0.5 \|tnsr - Z \times_1 U_1 \cdots \times_K U_K\|_F^2 + \sum_{n=1}^{K} \lambda_n \|U_n\|_1 + \lambda_{K+1} \|Z\|_1, \;\textit{where}\; Z \geq 0, \, U_n \geq 0.
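
The sketch below evaluates this objective with rTensor primitives; tucker_objective is a hypothetical helper (not part of the package), with Z the core tensor, U a list of the K factor matrices, and lambda as in Arguments:

tucker_objective <- function(tnsr, Z, U, lambda) {
  K <- length(U)
  est <- ttl(Z, U, ms = seq_len(K))  ## Z x_1 U_1 ... x_K U_K
  penalty_U <- sum(vapply(seq_len(K),
                          function(n) lambda[n] * sum(abs(U[[n]])),
                          numeric(1)))
  0.5 * fnorm(tnsr - est)^2 + penalty_U + lambda[K + 1] * sum(abs(Z@data))
}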

If core_nonneg is FALSE, the core tensor Z is allowed to have negative elements, and the update rule z_{i,j} = \max(0, z_{i,j} - \lambda_{K+1}/L_{K+1}) is replaced by z_{i,j} = \mathrm{sign}(z_{i,j}) \max(0, |z_{i,j}| - \lambda_{K+1}/L_{K+1}). The method stops when either the relative improvement of the error stays below the tolerance tol for 3 consecutive iterations, or when both the relative error improvement and the relative error (with respect to the norm of tnsr) fall below the tolerance. Otherwise, it stops when the maximum number of iterations (max_iter) or the time limit (max_time) is reached.
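
The two elementwise thresholding rules can be sketched as follows; prox_core is a hypothetical helper, z the array of core elements after a gradient step, and lam_L the ratio \lambda_{K+1}/L_{K+1}:

prox_core <- function(z, lam_L, core_nonneg = TRUE) {
  if (core_nonneg) {
    pmax(0, z - lam_L)                 ## clamp to zero: nonnegative thresholding
  } else {
    sign(z) * pmax(0, abs(z) - lam_L)  ## signed soft-thresholding
  }
}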

Value

a list:

U

nonnegative factor matrices

Z

nonnegative core tensor

est

the estimate of tnsr: Z \times_1 U_1 \cdots \times_K U_K

conv

method convergence indicator

resid

the Frobenius norm of the residual error l(Z,U) plus the regularization penalties (if any)

n_iter

number of iterations

n_redo

number of times Z and U were recalculated to avoid an increase in the objective function

diag

convergence info for each iteration

all_resids

the residuals for each iteration

all_rel_resid_deltas

residual delta relative to the current residual

all_rel_resids

residual relative to sqrt(||tnsr||)
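
A sketch of inspecting the returned list, continuing the call example from Arguments; it assumes conv is a logical flag and est a Tensor, as described above:

if (decomp$conv) {
  rel_err <- fnorm(tnsr - decomp$est) / fnorm(tnsr)  ## relative reconstruction error
  cat("converged in", decomp$n_iter, "iterations;",
      "relative error:", rel_err, "\n")
  plot(decomp$all_rel_resids, type = "l",
       xlab = "iteration", ylab = "relative residual")
}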

Note

The implementation is based on the ntds() MATLAB code by Yangyang Xu and Wotao Yin.

References

Y. Xu, "Alternating proximal gradient method for sparse nonnegative Tucker decomposition", Mathematical Programming Computation, 7(1), 39-70, 2015.

See Also

tucker

http://www.caam.rice.edu/~optimization/bcu/

