Description:

Function to generate 3-class classification benchmarking data as introduced by J.H. Friedman (1989).
Arguments:

setting: the problem setting (integer 1, 2, ..., 6).

p: number of variables (6, 10, 20 or 40).

samplesize: sample size (number of observations, >= 6).
Details:

When J.H. Friedman introduced Regularized Discriminant Analysis (RDA) in 1989, he used artificially generated data to test the procedure and to examine its performance in comparison to Linear and Quadratic Discriminant Analysis. Six different settings were considered to demonstrate potential strengths and weaknesses of the new method:
1. equal spherical covariance matrices,
2. unequal spherical covariance matrices,
3. equal, highly ellipsoidal covariance matrices with mean differences in a low-variance subspace,
4. equal, highly ellipsoidal covariance matrices with mean differences in a high-variance subspace,
5. unequal, highly ellipsoidal covariance matrices with zero mean differences, and
6. unequal, highly ellipsoidal covariance matrices with nonzero mean differences.
For each of the 6 settings, data were generated with 6, 10, 20 and 40 variables. Classification performance was then measured by repeatedly creating training data sets of 40 observations and estimating the misclassification rates on test sets of 100 observations.
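The evaluation scheme described above can be sketched as follows. This is a simplified, hypothetical illustration (not the package's implementation): data are drawn in the style of setting 1 (equal spherical covariances), and a nearest-class-mean classifier stands in for LDA, to which it is equivalent under equal spherical covariances. The class means used here are arbitrary assumptions for the sketch.

```python
import random

def make_spherical_data(n, p, means, sd=1.0, rng=random):
    # Draw n observations from 3 spherical Gaussians (equal probabilities);
    # returns (labels, rows). Illustrates setting 1: equal spherical covariances.
    labels, rows = [], []
    for _ in range(n):
        k = rng.randrange(len(means))
        labels.append(k)
        rows.append([rng.gauss(m, sd) for m in means[k]])
    return labels, rows

def nearest_mean_error(train, test):
    # Fit class means on the training set and classify test points by the
    # closest mean -- a stand-in for LDA under equal spherical covariances.
    tr_labels, tr_rows = train
    classes = sorted(set(tr_labels))
    means = {}
    for c in classes:
        members = [x for l, x in zip(tr_labels, tr_rows) if l == c]
        means[c] = [sum(col) / len(members) for col in zip(*members)]
    te_labels, te_rows = test
    wrong = 0
    for l, x in zip(te_labels, te_rows):
        pred = min(classes,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(x, means[c])))
        wrong += pred != l
    return wrong / len(te_labels)

# Repeat: train on 40 observations, test on 100, average the error rate.
p = 6
means = [[0.0] * p, [3.0] + [0.0] * (p - 1), [0.0] * (p - 1) + [3.0]]
rng = random.Random(1)
errs = [nearest_mean_error(make_spherical_data(40, p, means, rng=rng),
                           make_spherical_data(100, p, means, rng=rng))
        for _ in range(20)]
print(sum(errs) / len(errs))
```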
The number of classes is always 3. Class labels are assigned to observations randomly with equal probabilities, so the number of observations per class differs from dataset to dataset. To make sure covariances can be estimated at all, there are always at least two observations from each class in a dataset.
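The label-assignment rule above could be realized by a simple rejection scheme, sketched below; re-drawing until every class occurs at least twice is an assumption for illustration, not necessarily how the package enforces the guarantee.

```python
import random

def draw_labels(samplesize, n_classes=3, rng=random):
    # Assign labels 1..n_classes uniformly at random; redraw the whole vector
    # until every class occurs at least twice (hypothetical rejection scheme).
    # Requires samplesize >= 2 * n_classes (cf. samplesize >= 6 for 3 classes).
    while True:
        labels = [rng.randrange(1, n_classes + 1) for _ in range(samplesize)]
        if all(labels.count(c) >= 2 for c in range(1, n_classes + 1)):
            return labels

print(draw_labels(10))
```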
Value:

Depending on asmatrix, either a data frame or a matrix with samplesize rows and p+1 columns, the first column containing the class labels and the remaining columns being the variables.
Author(s):

Christian Röver, [email protected]
References:

Friedman, J.H. (1989): Regularized Discriminant Analysis. Journal of the American Statistical Association, 84, 165-175.