The Minimum Distance Rule using Modified Empirical Bayes (MDMEB) classifier

Description

Given a set of training data, this function builds the MDMEB classifier from Srivastava and Kubokawa (2007). The MDMEB classifier is an adaptation of the linear discriminant analysis (LDA) classifier designed for small-sample, high-dimensional data. Srivastava and Kubokawa (2007) proposed a modification of the standard maximum likelihood estimator of the pooled covariance matrix, in which only the largest 95% of the eigenvalues and their corresponding eigenvectors are kept. The resulting covariance matrix is then shrunken towards a scaled identity matrix. The value of 95% is the default and can be changed via the eigen_pct argument.
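The eigenvalue truncation and shrinkage described above can be sketched in R. This is an illustrative approximation, not the package's exact implementation: the truncation rule (keep the floor(eigen_pct * p) largest eigenvalues) and the shrinkage scale (the mean of the kept eigenvalues) are assumptions made for illustration.

```r
# Illustrative sketch of the modified covariance estimator described
# above: keep the largest eigen_pct of the eigenvalues of a pooled
# covariance estimate S, then shrink toward a scaled identity matrix.
# The exact truncation rule and shrinkage scale used by the package
# may differ; this only demonstrates the idea.
modified_cov <- function(S, eigen_pct = 0.95) {
  p <- nrow(S)
  eig <- eigen(S, symmetric = TRUE)          # eigenvalues in decreasing order
  k <- max(1, floor(eigen_pct * p))          # number of eigenvalues kept
  vals <- eig$values[1:k]
  vecs <- eig$vectors[, 1:k, drop = FALSE]
  S_trunc <- vecs %*% diag(vals, nrow = k) %*% t(vecs)  # truncated estimate
  gamma <- mean(vals)                        # scale for the identity term
  S_trunc + gamma * diag(p)                  # shrunken covariance estimate
}
```

Because the identity term adds a positive ridge, the resulting matrix is positive definite even when the original estimate is rank-deficient, which is the situation the classifier targets in high dimensions.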

Usage

mdmeb(x, ...)

## Default S3 method:
mdmeb(x, y, prior = NULL, eigen_pct = 0.95, ...)

## S3 method for class 'formula'
mdmeb(formula, data, prior = NULL, ...)

## S3 method for class 'mdmeb'
predict(object, newdata, ...)

Arguments

x

matrix containing the training data. The rows are the sample observations, and the columns are the features.

...

additional arguments

y

vector of class labels for each training observation

prior

vector with prior probabilities for each class. If NULL (default), then equal probabilities are used. See details.

eigen_pct

the percentage of eigenvalues kept

formula

A formula of the form groups ~ x1 + x2 + ... That is, the response is the grouping factor and the right hand side specifies the (non-factor) discriminators.

data

data frame from which variables specified in formula are preferentially to be taken.

object

trained mdmeb object

newdata

matrix of observations to predict. Each row corresponds to a new observation.

Details

The matrix of training observations is given in x. The rows of x contain the sample observations, and the columns contain the features for each training observation.

The vector of class labels given in y is coerced to a factor. The length of y should match the number of rows in x.

An error is thrown if a given class has fewer than 2 observations because the variance for each feature within a class cannot be estimated from fewer than 2 observations.

The vector prior contains the a priori class membership probability for each class. If prior is NULL (default), the class membership probabilities are estimated as the sample proportion of observations belonging to each class. Otherwise, prior should be a vector with the same length as the number of classes in y. The prior probabilities should be nonnegative and sum to one.
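The prior-handling rule in this paragraph can be illustrated with a small helper; resolve_prior is a hypothetical function written for this sketch and is not part of the package.

```r
# Hypothetical helper illustrating the prior-handling rule described
# above (not part of the package): with prior = NULL the sample class
# proportions are used; otherwise the supplied vector must have one
# entry per class, be nonnegative, and sum to one.
resolve_prior <- function(y, prior = NULL) {
  y <- as.factor(y)
  if (is.null(prior)) {
    prior <- as.vector(table(y)) / length(y)  # sample proportions
  }
  stopifnot(length(prior) == nlevels(y),
            all(prior >= 0),
            isTRUE(all.equal(sum(prior), 1)))
  prior
}
```

For example, resolve_prior(c("a", "a", "b")) yields the sample proportions for the two classes, while an explicit vector such as c(0.3, 0.7) passes through after validation.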

Value

mdmeb object that contains the trained MDMEB classifier

For predict, a list containing the predicted class memberships of each row in newdata

References

Srivastava, M. and Kubokawa, T. (2007). "Comparison of Discrimination Methods for High Dimensional Data," Journal of the Japanese Statistical Association, 37, 1, 123-134.

Examples

set.seed(42)  # make the random train/test split reproducible
n <- nrow(iris)
train <- sample(seq_len(n), n / 2)
mdmeb_out <- mdmeb(Species ~ ., data = iris[train, ])
predicted <- predict(mdmeb_out, iris[-train, -5])$class

mdmeb_out2 <- mdmeb(x = iris[train, -5], y = iris[train, 5])
predicted2 <- predict(mdmeb_out2, iris[-train, -5])$class
all.equal(predicted, predicted2)