kplsrda
Discrimination (DA) based on kernel PLSR (KPLSR)
The training variable y (univariate class membership) is transformed to a dummy table containing nclas columns, where nclas is the number of classes present in y. Each column is a dummy variable (0/1). Then, a kernel PLSR (KPLSR) is run on the X-data and the dummy table, returning predictions of the dummy variables. For a given observation, the final prediction is the class corresponding to the dummy variable for which the prediction is highest.
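As a rough illustration of this decision rule (not the package's internal code), the sketch below builds the dummy table from a class vector and recovers the predicted class as the column with the largest predicted value; dummycode() and the simulated predictions are hypothetical.

## Illustrative sketch only: dummy coding of y and the argmax decision rule
dummycode <- function(y) {
  lev <- sort(unique(y))                            # the nclas class levels
  Y <- sapply(lev, function(l) as.numeric(y == l))  # one 0/1 column per level
  colnames(Y) <- lev
  Y                                                 # n x nclas dummy table
}
y <- c("a", "b", "a", "c")
dummycode(y)
## Given predicted values of the dummy variables (one column per class),
## the predicted class is the column with the highest value in each row
predvals <- matrix(runif(2 * 3), ncol = 3,
                   dimnames = list(NULL, c("a", "b", "c")))
colnames(predvals)[max.col(predvals)]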
kplsrda(X, y, weights = NULL, nlv, kern = "krbf", ...)
## S3 method for class 'Kplsrda'
predict(object, X, ..., nlv = NULL)
X: For the main function: Training X-data (n, p). For the predict method: New X-data (m, p) to consider.
y: Training class membership (n).
weights: Weights (n) to apply to the training observations. Default to NULL.
nlv: The number(s) of LVs to calculate.
kern: Name of the function defining the considered kernel for building the Gram matrix. See krbf for syntax, and the other available kernel functions.
object: A fitted model, output of a call to the main functions.
...: Optional arguments to pass in the kernel function defined in kern (see the sketch after the examples below).
Value: See the examples.
## Simulate a small training set; the first m observations serve as test data
n <- 50 ; p <- 8
Xtrain <- matrix(rnorm(n * p), ncol = p)
ytrain <- sample(c(1, 4, 10), size = n, replace = TRUE)
#ytrain <- sample(c("a", "10", "d"), size = n, replace = TRUE)  # class labels can also be characters
m <- 5
Xtest <- Xtrain[1:m, ] ; ytest <- ytrain[1:m]
nlv <- 2
## Fit the KPLSR-DA model and predict the class of the test observations
fm <- kplsrda(Xtrain, ytrain, nlv = nlv)
names(fm)
predict(fm, Xtest)
pred <- predict(fm, Xtest)$pred
## Classification error rate on the test set
err(pred, ytest)
## Predictions for several numbers of LVs (nlv can be a single value or a sequence)
predict(fm, Xtest, nlv = 0:nlv)$posterior
predict(fm, Xtest, nlv = 0)$posterior
predict(fm, Xtest, nlv = 0:nlv)$pred
predict(fm, Xtest, nlv = 0)$pred
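The kernel and its parameters are controlled through kern and ..., and weights can weight the training observations. Below is a brief sketch continuing the example above; it assumes that the krbf kernel accepts a gamma parameter and that a vector of unit weights is valid input, which should be checked against the kernel's and the function's own documentation.

## Sketch (assumptions noted above): explicit weights and a kernel
## parameter passed through '...'
fm2 <- kplsrda(Xtrain, ytrain, weights = rep(1, n), nlv = nlv,
               kern = "krbf", gamma = 0.1)
predict(fm2, Xtest)$pred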