# kpls_nipals: Nonlinear kernel PLS algorithm in mlesnoff/rnirs: Dimension Reduction, Regression and Discrimination for Chemometrics


## Nonlinear kernel PLS algorithm

### Description

Function `kpls_nipals` fits a KPLS model with the NIPALS algorithm (Rosipal & Trejo, 2001).
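The NIPALS KPLS iterations can be sketched as follows. This is a minimal illustration of the steps described by Rosipal & Trejo (2001), not the package implementation: the function name `kpls_sketch` is hypothetical, `K` is assumed to be an already-centered Gram matrix, and observation weights are not handled.

``````r
## Minimal NIPALS KPLS sketch (Rosipal & Trejo, 2001).
## K: centered n x n Gram matrix; Y: n x q response matrix.
## Returns the n x ncomp matrix of scores.
kpls_sketch <- function(K, Y, ncomp, tol = 1e-8, maxit = 100) {
  Y <- as.matrix(Y)
  n <- nrow(K)
  T <- matrix(0, n, ncomp)
  for (a in seq_len(ncomp)) {
    u <- Y[, 1]
    t.old <- rep(0, n)
    for (iter in seq_len(maxit)) {
      t <- K %*% u
      t <- t / sqrt(sum(t^2))          # normalized X-score
      if (sum((t - t.old)^2) < tol) break
      t.old <- t
      c <- crossprod(Y, t)             # Y-loadings
      u <- Y %*% c
      u <- u / sqrt(sum(u^2))          # normalized Y-score
    }
    T[, a] <- t
    ## Deflate K and Y in the subspace orthogonal to t
    P <- diag(n) - tcrossprod(t)
    K <- P %*% K %*% P
    Y <- Y - tcrossprod(t) %*% Y
  }
  T
}
``````

Because each component is extracted from the deflated Gram matrix, the resulting score columns are orthonormal.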

The kernel Gram matrix is internally centered before the analysis, but the data are not column-wise scaled (there is no `scale` argument in the function). If needed, the user has to scale the data before calling the function.
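The centering of the Gram matrix corresponds to mean-centering the implicit feature map, i.e. the usual double-centering. A sketch for the unweighted case (the helper name `center_gram` is illustrative, not a package function):

``````r
## Double-center an n x n Gram matrix K:
## Kc = (I - 11'/n) K (I - 11'/n)
center_gram <- function(K) {
  n <- nrow(K)
  J <- matrix(1 / n, n, n)                 # the 11'/n averaging matrix
  K - J %*% K - K %*% J + J %*% K %*% J
}
``````

For a linear kernel, `center_gram(tcrossprod(X))` equals the Gram matrix of the column-centered `X`.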

Row observations can optionally be weighted (using argument `weights`).
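The normalization applied to the a priori weights can be sketched as follows (illustrative code; the helper name `norm_weights` is not a package internal):

``````r
## Normalize a priori weights to sum to 1; with weights = NULL,
## every one of the n observations gets weight 1 / n.
norm_weights <- function(weights, n) {
  if (is.null(weights)) weights <- rep(1, n)
  weights / sum(weights)
}
``````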

### Usage

``````
kpls_nipals(X, Y, ncomp, kern = kpol, weights = NULL,
  tol = .Machine$double.eps^0.5, maxit = 100, ...)
``````

### Arguments

- `X`: A `n x p` matrix or data frame of observations.
- `Y`: A `n x q` matrix or data frame, or a vector of length `n`, of responses.
- `ncomp`: The number of scores (= components = latent variables) to consider.
- `kern`: A function defining the kernel to consider (default to `kpol`). See `kpol` for the syntax and the other available kernel functions.
- `weights`: A vector of length `n` defining a priori weights to apply to the observations. Internally, the weights are normalized to sum to 1. Default to `NULL` (all weights are set to `1 / n`).
- `tol`: Tolerance level for stopping the NIPALS iterations.
- `maxit`: Maximum number of NIPALS iterations.
- `...`: Optional arguments to pass to the kernel function defined in `kern`.

### Value

A list of outputs (see examples), such as:

- `T`: KPLS scores (`n x ncomp`).

### References

Rosipal, R., Trejo, L.J., 2001. Kernel Partial Least Squares Regression in Reproducing Kernel Hilbert Space. Journal of Machine Learning Research 2, 97-123.

### Examples

``````
n <- 8
p <- 4
set.seed(1)
X <- matrix(rnorm(n * p, mean = 10), ncol = p)
y1 <- 100 * rnorm(n)
y2 <- 100 * rnorm(n)
Y <- cbind(y1, y2)
set.seed(NULL)

fm <- kpls_nipals(X, Y, ncomp = 5)
names(fm)
## KPLS scores
fm$T
``````

mlesnoff/rnirs documentation built on April 24, 2023, 4:17 a.m.