estimate_local_resources: Simple resource estimator (BETA)

Description Usage Arguments Value

View source: R/estimate_resources.R

Description

Estimate number of cores and available memory per core for the current system. Memory estimation is still in BETA, so if you can do better please show me how! (https://github.com/ytoren/simscaleR/issues)

Usage

estimate_local_resources(
  do_gc = TRUE,
  logical = FALSE,
  headless = FALSE,
  overhead_factor = 0.05,
  min_block_fraction = 0.05,
  verbose = FALSE
)

Arguments

do_gc

Logical. Should we run garbage collection before estimating? Default is TRUE.

logical

Logical. Should we count logical CPUs or only physical CPUs? Depending on your hardware / OS you might want to turn this on or off (in some cases using virtual cores reduces performance). Default is FALSE.

headless

Logical. Should we use all available CPUs or save one to keep the system interactive? Default is FALSE.

overhead_factor

A number in (0,1). What fraction of total memory should be reserved for cluster memory overhead? Since any cluster has some memory overhead, we can't just divide all available memory between cores (we would either run out of memory or trigger slow & expensive writing to disk). The default value is 0.05, which means 5% of memory is reserved. If performance slows when you increase the number of cores (or you see intensive I/O activity), consider increasing this parameter; this breaks the calculation into smaller chunks.

min_block_fraction

A number in (0,1). What is the minimum fraction of memory that still allows block calculations? If we establish that per-block memory is below this fraction, the recommendation defaults to loop calculations. Default is 0.05. On headless systems with many cores and lots of memory, if the recommendation is to loop, you may want to lower this threshold (or set it to 0 to disable the check completely).

verbose

Logical. Should informative messages be printed along the way?
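To make the interplay of overhead_factor and min_block_fraction concrete, here is a minimal base-R sketch of the kind of arithmetic described above. This is an illustration under assumed numbers (16 GB of RAM, 4 cores), not the package's actual formula; see R/estimate_resources.R for the real implementation.

```r
# Assumed system (hypothetical numbers, not measured):
total_mem_gb <- 16
n_cores <- 4
overhead_factor <- 0.05      # fraction of memory reserved for cluster overhead
min_block_fraction <- 0.05   # minimum per-core fraction for block calculations

# Reserve a fraction of total memory for cluster overhead ...
usable_mem_gb <- total_mem_gb * (1 - overhead_factor)

# ... then split the remainder evenly across cores.
mem_per_core_gb <- usable_mem_gb / n_cores  # 3.8 GB per core

# If each core's share falls below min_block_fraction of total memory,
# the recommendation would be loop (not block) calculations.
block_ok <- (mem_per_core_gb / total_mem_gb) >= min_block_fraction  # TRUE here
```

With these numbers, each core gets 3.8 GB (23.75% of total memory), comfortably above the 5% threshold, so block calculations would be recommended. Raising overhead_factor shrinks each core's share, which is why doing so breaks the work into smaller chunks.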

Value

A list with:


ytoren/simscaleR documentation built on April 17, 2021, 12:32 p.m.