Description Usage Arguments Details Value Author(s) References See Also Examples
Multicategory Vertex Discriminant Analysis (VDA) for classifying an outcome with k possible categories and p features, based on a data set of n cases. The default penalty function is the ridge penalty; lasso, Euclidean, and a mixture of lasso and Euclidean penalties are also available. For those, please refer to vda.le.
vda.r(x, y, lambda)
x
n x p matrix or data frame containing the cases for each feature. The rows correspond to cases and the columns to features. The intercept column is not included.
y
n x 1 vector representing the outcome variable. Each element denotes which one of the k classes that case belongs to.
lambda
Tuning constant. The default value is 1/n. An optimal value can also be found using cv.vda.r.
Outcome classification is based on linear discrimination among the vertices of a regular simplex in a (k-1)-dimensional Euclidean space, where each vertex represents one of the categories. Discrimination is phrased as a regression problem involving ε-insensitive residuals and an L2 quadratic ("ridge") penalty on the coefficients of the linear predictors. The objective function can be minimized by a primal Majorization-Minimization (MM) algorithm that
relies on quadratic majorization and iteratively re-weighted least squares,
is simpler to program than algorithms that pass to the dual of the original optimization problem, and
can be accelerated by step doubling.
Comparisons on real and simulated data suggest that the MM algorithm for VDA is competitive in statistical accuracy and computational speed with the best currently available algorithms for discriminant analysis, such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), k-nearest neighbors, one-vs-rest binary support vector machines, multicategory support vector machines, classification and regression trees (CART), and random forest prediction.
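To make the vertex geometry above concrete, the sketch below builds k unit-length, equidistant vertices in (k-1)-dimensional space using one standard closed-form construction. This is an illustration only; the exact vertex parameterization used internally by vda.r is an assumption and may differ.

```r
# Sketch: vertices of a regular simplex in R^{k-1}, one per outcome category.
# Each vertex has unit norm and all pairwise distances are equal.
simplex_vertices <- function(k) {
  V <- matrix(0, nrow = k, ncol = k - 1)
  V[1, ] <- (k - 1)^(-1/2)                  # first vertex: constant vector
  cc <- -(1 + sqrt(k)) / (k - 1)^(3/2)
  d  <- sqrt(k / (k - 1))
  for (j in 2:k) {
    V[j, ] <- cc                            # remaining vertices share a base value
    V[j, j - 1] <- cc + d                   # with one coordinate shifted
  }
  V
}

V <- simplex_vertices(3)
rowSums(V^2)   # each vertex lies on the unit sphere
dist(V)        # all pairwise distances are equal
```

For k = 3 this yields three equidistant points on the unit circle; classification then reduces to asking which vertex the fitted linear predictor lands nearest to.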
feature
The feature matrix.
stand.feature
The feature matrix with all columns standardized, with the exception of the intercept column, which is left unstandardized.
class
The class vector.
cases
Number of cases, n.
classes
Number of classes, k.
features
Number of features, p.
lambda
Tuning constant.
predicted
Vector of predicted category values based on VDA.
coefficient
The estimated coefficient matrix, where the columns represent the coefficients for each predictor variable corresponding to the k-1 coordinates of the vertex representation.
training_error_rate
The percentage of cases in the training set whose predicted outcome category is not equal to the case's true category.
call
The matched call.
attr(,"class")
The function returns an object of class "vda.r".
Edward Grant, Xia Li, Kenneth Lange, Tong Tong Wu
Maintainer: Edward Grant edward.m.grant@gmail.com
Lange, K. and Wu, T.T. (2008) An MM Algorithm for Multicategory Vertex Discriminant Analysis. Journal of Computational and Graphical Statistics, Volume 17, No 3, 527-544.
For determining the optimal value of lambda, refer to cv.vda.r.
For the high-dimensional setting and variable selection, refer to vda.le.
# load zoo data
# column 1 is name, columns 2:17 are features, column 18 is class
data(zoo)
#matrix containing all predictor vectors
x <- zoo[,2:17]
#outcome class vector
y <- zoo[,18]
#run VDA
out <- vda.r(x, y)
#Predict five cases based on VDA
fivecases <- matrix(0,5,16)
fivecases[1,] <- c(1,0,0,1,0,0,0,1,1,1,0,0,4,0,1,0)
fivecases[2,] <- c(1,0,0,1,0,0,1,1,1,1,0,0,4,1,0,1)
fivecases[3,] <- c(0,1,1,0,1,0,0,0,1,1,0,0,2,1,1,0)
fivecases[4,] <- c(0,0,1,0,0,1,1,1,1,0,0,1,0,1,0,0)
fivecases[5,] <- c(0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0)
predict(out, fivecases)
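Under the hood, prediction assigns each case to the category whose simplex vertex is nearest to the fitted (k-1)-dimensional linear predictor. The toy sketch below illustrates that nearest-vertex rule; the vertex set here is an arbitrary stand-in, not the regular simplex vda.r actually uses, and the helper name is hypothetical.

```r
# Toy vertex set in R^2 for three classes (NOT the regular simplex used
# by vda.r; any fixed vertex set illustrates the decision rule).
V <- rbind(c( 1, 0),
           c(-1, 0),
           c( 0, 1))

# Nearest-vertex decision rule: given a fitted linear predictor z,
# return the row index of the closest vertex in V.
nearest_vertex <- function(z, V) which.min(rowSums(sweep(V, 2, z)^2))

nearest_vertex(c(0.9, 0.1), V)   # closest to vertex 1
```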