SHARP_large: Run SHARP for large-size (by default, >= 5000) single-cell...

Description Usage Arguments Details Examples

View source: R/SHARP.R

Description

For large-size datasets (by default, >= 5000 cells), we suggest first partitioning the dataset into several groups, then running SHARP on each group, and finally ensembling the per-group results with a similarity-based meta-clustering algorithm.

Usage

SHARP_large(
  scExp,
  ncells,
  ensize.K,
  reduced.dim,
  partition.ncells,
  hmethod,
  N.cluster,
  enpN.cluster,
  indN.cluster,
  minN.cluster,
  maxN.cluster,
  sil.thre,
  height.Ntimes,
  flashmark,
  flag,
  n.cores,
  forview,
  rM,
  rN.seed
)

Arguments

scExp

input single-cell expression matrix

ncells

number of single cells

ensize.K

number of applications of random projection for ensemble

reduced.dim

the dimension to be reduced to

partition.ncells

number of cells for each partition when using SHARP_large

Details

For each partition (or group), the default number of cells is 2000. Users can set a different number according to the computational capability of their own machines. The suggested criterion is to choose the largest number of single cells for which SHARP_small still runs fast enough (depending on the user's requirements).
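As a sketch of the trade-off above: smaller values of partition.ncells reduce per-group memory and runtime at the cost of more groups to meta-cluster. The argument values below are hypothetical, and it is assumed that the remaining arguments of SHARP_large take their defaults:

```r
library(SHARP)  # assumes the SHARP package is installed (shibiaowan/SHARP)

# scExp: a genes x cells single-cell expression matrix (assumed already loaded).
# Keep the documented default of 2000 cells per partition; lower it if
# SHARP_small is too slow on your machine for groups of that size.
enresults <- SHARP_large(scExp,
                         ncells = ncol(scExp),     # total number of single cells
                         ensize.K = 15,            # hypothetical ensemble size
                         reduced.dim = 100,        # hypothetical target dimension
                         partition.ncells = 2000)  # default partition size
```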

Examples

enresults = SHARP_large(scExp, ncells, ensize.K, reduced.dim, partition.ncells)

shibiaowan/SHARP documentation built on April 28, 2021, 1:56 p.m.