Description

For large (>= 5000 cells) datasets, we suggest first partitioning the dataset into several groups, then running SHARP on each group, and finally ensembling the per-group results with a similarity-based meta-clustering algorithm.
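The divide-and-conquer idea can be sketched in base R. This is a toy stand-in, NOT SHARP's actual algorithm (SHARP uses random projection and its own weighted meta-clustering); here `kmeans` and `hclust` merely illustrate the partition-then-merge structure, and all data and parameter values are made up for the example:

```r
## Toy sketch: cluster each partition separately, then merge the
## per-partition centroids by similarity-based hierarchical clustering
## to obtain one consistent global labeling.
set.seed(1)
n <- 6000                                   # pretend "large" dataset
x <- matrix(rnorm(n * 10), nrow = n)        # cells x features
x[1:3000, 1] <- x[1:3000, 1] + 5            # plant two groups

# Split cells into partitions of 2000 (cf. partition.ncells)
parts <- split(seq_len(n), ceiling(seq_len(n) / 2000))

# Step 1: cluster each partition independently
local <- lapply(parts, function(idx) kmeans(x[idx, ], centers = 2))

# Step 2: meta-cluster all per-partition centroids by similarity
centers <- do.call(rbind, lapply(local, `[[`, "centers"))
meta <- cutree(hclust(dist(centers)), k = 2)

# Step 3: map each cell to a global label via its partition's centroid
labels <- integer(n)
off <- 0
for (i in seq_along(parts)) {
  labels[parts[[i]]] <- meta[off + local[[i]]$cluster]
  off <- off + 2
}
table(labels)
```

The point of the sketch is that no single clustering run ever touches more than one partition's worth of cells, which is what keeps memory usage bounded on large datasets.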
Usage

SHARP_large(
scExp,
ncells,
ensize.K,
reduced.dim,
partition.ncells,
hmethod,
N.cluster,
enpN.cluster,
indN.cluster,
minN.cluster,
maxN.cluster,
sil.thre,
height.Ntimes,
flashmark,
flag,
n.cores,
forview,
rM,
rN.seed
)
Arguments

scExp: input single-cell expression matrix
ncells: number of single cells
ensize.K: number of applications of random projection for the ensemble
reduced.dim: the dimension to be reduced to
partition.ncells: number of cells for each partition when using SHARP_large
Details

For each partition (group), the default number of cells is 2000. Users can set a different number according to the computational capability of their own machines. The suggested criterion for choosing this number is that SHARP_small should still run fast enough (by the user's own standard) on a partition of the selected size.
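A quick sanity check when choosing `partition.ncells` is to count how many partitions (and hence per-partition clustering runs) a given setting implies. The arithmetic below is purely illustrative; the variable names are invented for the example, and SHARP's internal splitting may differ slightly at the boundaries:

```r
# How many partitions a given partition.ncells setting implies:
total.cells      <- 50000   # hypothetical dataset size
partition.ncells <- 2000    # the default partition size
n.partitions <- ceiling(total.cells / partition.ncells)
n.partitions                # 25 partitions, each sized for SHARP_small
```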
Examples

enresults <- SHARP_large(scExp, ncells, ensize.K, reduced.dim, partition.ncells)