do.llp | R Documentation

Local Learning Projection (LLP)
While Principal Component Analysis (PCA) aims at minimizing the global estimation error, the Local Learning Projection (LLP) approach tries to find the projection with the minimal local estimation error, in the sense that each projected datum can be well represented by its neighbors. For the kernel part, only the Gaussian kernel is supported, as suggested in the original paper. The parameter lambda controls the possible rank deficiency of the kernel matrix.
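To make the roles of t and lambda concrete, here is a minimal sketch (not the package's internal code) of a Gaussian kernel matrix with bandwidth t and a ridge term lambda on the diagonal; the exact kernel scaling used by do.llp is an assumption here.

## Illustrative sketch only: Gaussian kernel with ridge regularization.
## Assumes k(x, y) = exp(-||x - y||^2 / t); do.llp's internal scaling may differ.
gauss_kernel <- function(X, t = 1, lambda = 1) {
  D2 <- as.matrix(dist(X))^2        # squared pairwise distances
  K  <- exp(-D2 / t)                # heat kernel with bandwidth t
  K + lambda * diag(nrow(K))        # lambda guards against rank deficiency
}
K <- gauss_kernel(matrix(rnorm(50), ncol = 5), t = 1, lambda = 0.1)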
Usage

do.llp(
  X,
  ndim = 2,
  type = c("proportion", 0.1),
  symmetric = c("union", "intersect", "asymmetric"),
  preprocess = c("center", "scale", "cscale", "decorrelate", "whiten"),
  t = 1,
  lambda = 1
)
Arguments

X : an (n\times p) matrix or data frame whose rows are observations.

ndim : an integer-valued target dimension.

type : a vector specifying the neighborhood graph construction. The following types are supported: c("knn", k), c("enn", radius), and c("proportion", ratio). The default c("proportion", 0.1) connects roughly 10% of the data points as neighbors.

symmetric : one of "union", "intersect", or "asymmetric"; determines how the neighborhood graph is symmetrized. Default is "union".

preprocess : an additional option for preprocessing the data. Default is "center". See also aux.preprocess for details.

t : bandwidth for the heat kernel in (0, ∞).

lambda : regularization parameter for the kernel matrix in [0, ∞).
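As a usage sketch of the neighborhood argument, the calls below assume the graph-construction vectors listed above; the parameter values are illustrative only.

## Hedged examples of neighborhood specifications (values are arbitrary)
X <- aux.gensamples(n = 100, dname = "crown")
out_knn  <- do.llp(X, ndim = 2, type = c("knn", 10))          # 10 nearest neighbors
out_enn  <- do.llp(X, ndim = 2, type = c("enn", 1))           # epsilon-ball of radius 1
out_prop <- do.llp(X, ndim = 2, type = c("proportion", 0.1))  # ~10% of the data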
Value

a named list containing:

Y : an (n\times ndim) matrix whose rows are embedded observations.

trfinfo : a list containing information for out-of-sample prediction.

projection : a (p\times ndim) matrix whose columns are the basis for projection.
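Because the returned projection is a linear basis, new observations can in principle be embedded by repeating the training preprocessing and multiplying by the projection matrix. The sketch below assumes preprocess = "center" (subtracting the training column means) and is not an official out-of-sample API.

## Hedged sketch: embedding unseen data with the returned basis,
## assuming "center" preprocessing was used for training.
X    <- aux.gensamples(n = 100, dname = "crown")
out  <- do.llp(X, ndim = 2, preprocess = "center")
Xnew <- aux.gensamples(n = 10, dname = "crown")
Ynew <- sweep(Xnew, 2, colMeans(X)) %*% out$projection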
References

Wu M, Yu K, Yu S, Schölkopf B (2007). "Local Learning Projections." In Proceedings of the 24th International Conference on Machine Learning (ICML).
Examples

## generate data
set.seed(100)
X <- aux.gensamples(n=100, dname="crown")

## test different lambda (regularization) values
out1 <- do.llp(X, ndim=2, lambda=0.1)
out2 <- do.llp(X, ndim=2, lambda=1)
out3 <- do.llp(X, ndim=2, lambda=10)

## visualize
opar <- par(no.readonly=TRUE)
par(mfrow=c(1,3))
plot(out1$Y, pch=19, main="lambda=0.1")
plot(out2$Y, pch=19, main="lambda=1")
plot(out3$Y, pch=19, main="lambda=10")
par(opar)