Vertex Discriminant Analysis
Description
Multicategory Vertex Discriminant Analysis (VDA) for classifying an outcome with k possible categories and p features, based on a data set of n cases. The default penalty function is ridge. Lasso, Euclidean, and a mixture of lasso and Euclidean penalties are also available; for these, refer to vda.le.
Usage
vda.r(x, y, lambda)
Arguments
x 
An n x p matrix or data frame containing the cases for each feature. The rows correspond to cases and the columns to features. The intercept column should not be included. 
y 
An n x 1 vector representing the outcome variable. Each element denotes which of the k classes that case belongs to. 
lambda 
Tuning constant. The default value is 1/n. The optimal value can also be found using cv.vda.r. 
Details
Outcome classification is based on linear discrimination among the vertices of a regular simplex in a (k-1)-dimensional Euclidean space, where each vertex represents one of the categories. Discrimination is phrased as a regression problem involving ε-insensitive residuals and an L2 quadratic ("ridge") penalty on the coefficients of the linear predictors. The objective function can be minimized by a primal Majorization-Minimization (MM) algorithm that
relies on quadratic majorization and iteratively reweighted least squares,
is simpler to program than algorithms that pass to the dual of the original optimization problem, and
can be accelerated by step doubling.
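The simplex vertices and the penalized objective described above can be written down in a few lines. The following is an illustrative sketch, not the package's internal implementation: for simplicity it places the k vertices in R^k (centered so they span a (k-1)-dimensional subspace) rather than working directly in R^(k-1), and the names `simplex_vertices`, `eps_residual`, and `vda_objective` are invented here for illustration.

```r
# Vertices of a regular simplex: centering the k standard basis vectors of
# R^k gives k equidistant points lying in a (k-1)-dimensional subspace.
# Row j is the vertex assigned to class j.
simplex_vertices <- function(k) diag(k) - 1 / k

# eps-insensitive Euclidean residual: zero inside a ball of radius eps,
# otherwise the Euclidean norm of the residual minus eps.
eps_residual <- function(r, eps) max(sqrt(sum(r * r)) - eps, 0)

# Ridge-penalized objective of the kind the MM algorithm minimizes:
# B is a p x k coefficient matrix, X an n x p feature matrix, and y the
# n x 1 vector of class labels in 1..k.
vda_objective <- function(B, X, y, eps, lambda) {
  V    <- simplex_vertices(max(y))     # one vertex per class
  fits <- X %*% B                      # n predicted points in vertex space
  res  <- V[y, , drop = FALSE] - fits  # residual from each case's class vertex
  mean(apply(res, 1, eps_residual, eps = eps)) + lambda * sum(B * B)
}
```

Quadratic majorization replaces the nonsmooth ε-insensitive loss at each iteration with a quadratic surrogate, so each MM update reduces to a weighted least squares problem.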
Comparisons on real and simulated data suggest that the MM algorithm for VDA is competitive in statistical accuracy and computational speed with the best currently available algorithms for discriminant analysis, such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), k-nearest neighbors, one-vs-rest binary support vector machines, multicategory support vector machines, classification and regression trees (CART), and random forest prediction.
Value
feature 
Feature matrix 
stand.feature 
The feature matrix in which all columns are standardized, with the exception of the intercept column, which is left unstandardized. 
class 
Class vector 
cases 
Number of cases, n. 
classes 
Number of classes, k. 
features 
Number of features, p. 
lambda 
Tuning constant 
predicted 
Vector of predicted category values based on VDA. 
coefficient 
The estimated coefficient matrix, where the columns contain the coefficients for each predictor variable corresponding to each of the k-1 dimensions of the vertex space. 
training_error_rate 
The percentage of instances in the training set where the predicted outcome category is not equal to the case's true category. 
call 
The matched call 
attr(,"class") 
The function results in an object of class "vda.r" 
Author(s)
Edward Grant, Xia Li, Kenneth Lange, Tong Tong Wu
Maintainer: Edward Grant edward.m.grant@gmail.com
References
Lange, K. and Wu, T.T. (2008) An MM Algorithm for Multicategory Vertex Discriminant Analysis. Journal of Computational and Graphical Statistics, Volume 17, No. 3, 527-544.
See Also
For determining the optimal value of lambda, refer to cv.vda.r.
For the high-dimensional setting and variable selection, refer to vda.le.
Examples
# load zoo data
# column 1 is name, columns 2:17 are features, column 18 is class
data(zoo)
#matrix containing all predictor vectors
x <- zoo[,2:17]
#outcome class vector
y <- zoo[,18]
#run VDA
out <- vda.r(x, y)
#Predict five cases based on VDA
fivecases <- matrix(0,5,16)
fivecases[1,] <- c(1,0,0,1,0,0,0,1,1,1,0,0,4,0,1,0)
fivecases[2,] <- c(1,0,0,1,0,0,1,1,1,1,0,0,4,1,0,1)
fivecases[3,] <- c(0,1,1,0,1,0,0,0,1,1,0,0,2,1,1,0)
fivecases[4,] <- c(0,0,1,0,0,1,1,1,1,0,0,1,0,1,0,0)
fivecases[5,] <- c(0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0)
predict(out, fivecases)
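When the default lambda = 1/n is not suitable, the tuning constant can be chosen by cross-validation with cv.vda.r. The sketch below is hedged: the argument names (the fold count and lambda grid) and the returned component name lam.opt are assumptions here; consult help(cv.vda.r) for the exact interface.

```r
## Not run:
# choose lambda over a grid by 10-fold cross-validation
# (argument and component names assumed; see help(cv.vda.r))
cv  <- cv.vda.r(x, y, 10, lam.vec = 10^seq(-4, 1, by = 1))
out <- vda.r(x, y, cv$lam.opt)
## End(Not run)
```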
