Description

The main program: performs cepstral-based linear discriminant analysis of time series.

Usage

cep.lda(y, x, xNew, L, mcep, nw, k, cv, tol)
Arguments

y: n-vector indicating group membership of training time series.

x: N by n matrix containing the n training time series, each of length N.

xNew: N by nNew matrix containing nNew time series whose memberships are to be predicted.

L: Number of cepstral coefficients used in the lda. If FALSE, cross-validation is used for data-driven selection of L. Default is FALSE.

mcep: Maximum number of cepstral coefficients considered. Default is 10.

nw: Width of the tapers used in multitaper spectral estimation. Default is 4.

k: Number of tapers used in multitaper spectral estimation. Default is 7.

cv: If TRUE, returns results (classes and posterior probabilities) for leave-one-out cross-validation. Note that if the prior is estimated, the proportions in the whole data set are used. As with the standard lda function, if cv=TRUE, prediction on a test data set cannot be done and weight functions are not produced (similar to predict.lda). Default is FALSE.

tol: Tolerance used to decide whether a matrix is singular; variables and linear combinations of unit-variance variables whose variance is less than tol^2 are rejected.
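For intuition about what the L argument controls, the following is a minimal sketch, assuming cepstral coefficients are the inverse Fourier coefficients of the log spectrum; it uses a raw periodogram from base R in place of the multitaper estimate that cep.lda actually uses, and is not the package's internal computation:

```r
## Hedged sketch: cepstral coefficients of a single series, computed from
## a raw periodogram (cep.lda itself uses a multitaper estimate).
set.seed(1)
x <- arima.sim(list(ar = c(0.5, -0.3)), n = 512)  # toy AR(2) series
sp <- spec.pgram(x, taper = 0, plot = FALSE)      # periodogram on (0, 1/2]
logspec <- log(sp$spec)
## The k-th cepstral coefficient is the integral of log f(nu) * cos(2*pi*k*nu)
## over (0, 1); by symmetry of the spectrum it is approximated on (0, 1/2] by:
cep <- sapply(0:7, function(k) mean(logspec * cos(2 * pi * k * sp$freq)))
round(cep, 3)  # eight coefficients, C0..C7
```

Choosing L truncates this sequence: a small L keeps only the smooth, low-order shape of the log spectrum, while larger L retains finer spectral detail.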
Value

List with 5 elements.

C.lda: lda output on the cepstral scale, similar to the output of lda.

cep.data: Data frame containing the cepstral coefficients and group information from the training data.

Lopt: Number of cepstral coefficients used.

lspec: Estimated log-spectral weight functions.

predict: Results of classification. If external data xNew is supplied, these data are classified; if not, biased classification of the training data x is returned. For unbiased leave-one-out cross-validated classification of the training data, use cv=TRUE.
Author(s)

Zeda Li <zeda.li@temple.edu>; Robert Krafty <rkrafty@pitt.edu>
References

Krafty, RT (2016). Discriminant Analysis of Time Series in the Presence of Within-Group Spectral Variability. Journal of Time Series Analysis.
See Also

predict.ceplda, plot.ceplda, print.ceplda, Lopt.get
Examples

## Simulate training data
nj = 50 #number of series in training data
N = 500 #length of time series
traindata1 <- r.cond.ar2(N=N,nj=nj,r.phi1=c(.01,.7),r.phi2=c(-.12,-.06),r.sig2=c(.3,3))
traindata2 <- r.cond.ar2(N=N,nj=nj,r.phi1=c(.5,1.2),r.phi2=c(-.36,-.25),r.sig2=c(.3,3))
traindata3 <- r.cond.ar2(N=N,nj=nj,r.phi1=c(.9,1.5),r.phi2=c(-.56,-.75),r.sig2=c(.3,3))
train <- cbind(traindata1$X,traindata2$X,traindata3$X)
y <- c(rep(1,nj),rep(2,nj),rep(3,nj))
## Fit the discriminant analysis
fit <- cep.lda(y,train)
fit #displays group means and cepstral weight functions
## Discriminant plot
plot(fit)
## Plot log-spectral weights
par(mfrow=c(1,2))
plot(fit$lspec$frq, fit$lspec$dsc[,1],type='l',xlab="frequency", ylab="log-spectral weights")
plot(fit$lspec$frq, fit$lspec$dsc[,2],type='l',xlab="frequency", ylab="log-spectral weights")
## Biased classification of training data
mean(fit$predict$class == y) #classification rate
table(y,fit$predict$class)
## Fit the discriminant analysis while classifying training data via cross-validation
fit.cv <- cep.lda(y,train, cv=TRUE)
mean(fit.cv$predict$class == y) #classification rate
table(y,fit.cv$predict$class)
## Simulate test data
testdata1 <- r.cond.ar2(N=N,nj=nj,r.phi1=c(.01,.7),r.phi2=c(-.12,-.06),r.sig2=c(.3,3))
testdata2 <- r.cond.ar2(N=N,nj=nj,r.phi1=c(.5,1.2),r.phi2=c(-.36,-.25),r.sig2=c(.3,3))
testdata3 <- r.cond.ar2(N=N,nj=nj,r.phi1=c(.9,1.5),r.phi2=c(-.56,-.75),r.sig2=c(.3,3))
test <- cbind(testdata1$X,testdata2$X,testdata3$X)
yTest <- c(rep(1,nj),rep(2,nj),rep(3,nj))
## Fit discriminant analysis and classify new data
fit.pre <- cep.lda(y,train,test)
mean(fit.pre$predict$class == yTest) #classification rate on test data
table(yTest,fit.pre$predict$class)
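Relying only on the arguments and return values documented above, a short usage sketch: the cross-validated choice of L is stored in the fitted object's Lopt element, and the selection can be bypassed by passing L directly.

```r
## Number of cepstral coefficients selected by cross-validation
fit$Lopt
## Bypass the data-driven selection by fixing L explicitly
fit.fixed <- cep.lda(y, train, L = 5)
fit.fixed$Lopt
```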
Example output:

Loading required package: astsa
Loading required package: MASS
Loading required package: class
Loading required package: multitaper
Optimal L selected:
[1] 7
Linear Discriminat Analysis results:
Call:
lda(b, data = D.hat0, CV = FALSE, tol = tol)
Prior probabilities of groups:
1 2 3
0.3333333 0.3333333 0.3333333
Group means:
C0 C1 C2 C3 C4 C5
1 0.3423284 0.4795663 -0.022610659 -0.005007833 -0.012871940 -0.004088972
2 0.1570039 1.2373639 0.113795715 -0.047083614 -0.005310349 0.003602583
3 0.2711212 1.7417336 -0.006678723 -0.408506567 -0.350204275 -0.163858097
C6 C7
1 0.0007526505 -0.005135245
2 -0.0095896052 -0.007464212
3 -0.0095181663 0.058085390
Coefficients of linear discriminants:
LD1 LD2
C0 0.03330297 0.138069
C1 4.61036968 -4.056601
C2 -1.81696548 4.503829
C3 -6.88861108 -1.830867
C4 -3.79847338 -8.748704
C5 -1.11768995 -4.602806
C6 -1.63522831 3.158840
C7 0.76658152 3.080357
Proportion of trace:
LD1 LD2
0.9345 0.0655
[1] 0.9866667
y 1 2 3
1 48 2 0
2 0 50 0
3 0 0 50
[1] 0.9866667
y 1 2 3
1 48 2 0
2 0 50 0
3 0 0 50
[1] 0.98
yTest 1 2 3
1 47 3 0
2 0 50 0
3 0 0 50