Description

Finds the minimal number of clusters for which the Kullback-Leibler (KL) divergence is no larger than a user-specified threshold. The KL divergence decreases monotonically with the number of clusters.
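The selection rule described above can be sketched directly: because KL decreases monotonically in the number of clusters, the chosen size is the smallest k whose KL falls at or below thKL times the maximum KL on the grid. A minimal R illustration (the KL values below are made up for the sketch; clustKL computes them internally):

```r
## Hypothetical KL divergences over a grid of 1..8 clusters
## (monotonically decreasing, as stated in the Description).
KLs  <- c(5.0, 2.6, 1.4, 0.8, 0.45, 0.30, 0.22, 0.18)
thKL <- 0.1                       # proportion of the maximum KL

## Smallest number of clusters with KL <= thKL * max(KLs)
nCluster <- which(KLs <= thKL * max(KLs))[1]
nCluster                          # 5 here, since 0.45 <= 0.1 * 5.0
```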
Usage

clustKL(id_var, ls_par, dat, ls_idxA, nIter, thKL, regQ, seed)
Arguments

id_var   Character string naming the ID variable.
ls_par   Posterior means of the LMM parameters, as produced by postMean.
dat      Longitudinal data input.
ls_idxA  List of random-effect index vectors to project on.
nIter    Number of iterations used in the clustering optimisation.
thKL     Proportion of the maximum KL divergence, used as the upper threshold on the KL divergence when choosing the number of clusters.
regQ     Positive regularisation value added to the diagonal of the matrix to be inverted.
seed     Random seed used to initialise the cluster centres.
Value

A list with components list(nClusters, KLs, cluster0):

nClusters  Vector of optimised cluster numbers, one per set of random effects in ls_idxA.
KLs        List of KL divergences over the grid of cluster numbers, one per set of random effects in ls_idxA.
cluster0   List of matrices, one per set of random effects in ls_idxA; column k gives the cluster membership when k clusters are used.
See Also

Other BayesPC main functions: modelStan(), pcFit(), postMean()

Other cluster number choices: clustBoot()
Examples

data(df_of_draws)
ls_par <- postMean(df_of_draws, paste0("Z", 1:10), "ID", DATASET)
ls_idxA <- list(
  seq(10),
  1:4,
  5:7,
  8:10
)
out_KL <- clustKL("ID", ls_par, DATASET, ls_idxA, 10, .1, 1e-6, 1)
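After a run like the one in the Examples, the components named in the Value section can be inspected. The indexing below assumes each list element lines up with the corresponding entry of ls_idxA:

```r
out_KL$nClusters                  # optimised cluster count per random-effect set
out_KL$KLs[[1]]                   # KL divergence at each candidate cluster
                                  # number, for the first set seq(10)
table(out_KL$cluster0[[1]][, 2])  # membership sizes when 2 clusters are used
```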