Use multiple cores on a local Linux machine to spawn parallel jobs.


Jobs are spawned by starting multiple R sessions on the command line (similar to true batch systems). The packages parallel and multicore are not used in any way.


Usage

makeClusterFunctionsMulticore(ncpus = max(getOption("mc.cores",
  detectCores()) - 1, 1), max.jobs, max.load, nice,
  r.options = c("--no-save", "--no-restore", "--no-init-file",
  "--no-site-file"), script)



Arguments

ncpus: [integer(1)] Number of VPUs of the worker. Default is to use all cores but one, where the total number of cores "available" is given by the option mc.cores; if that is not set, it is inferred by detectCores.

max.jobs: [integer(1)] Maximal number of jobs that can run concurrently for the current registry. Default is ncpus.


max.load: [numeric(1)] Load average (of the last 5 minutes) at which the worker is considered occupied, so that no further jobs are submitted. Default is ncpus - 1.


nice: [integer(1)] Process priority to run R with, set via nice. Integers between -20 and 19 are allowed. If missing, processes are not nice'd and the system default applies (usually 0).


r.options: [character] Options for R and Rscript, one option per element of the vector, a la "--vanilla". Default is c("--no-save", "--no-restore", "--no-init-file", "--no-site-file").


script: [character(1)] Path to the helper bash script which interacts with the worker. You really should not have to touch this, as doing so would imply that we have published an incompatible version for your system. This option is provided only as a last resort for very experienced hackers. Note that the path has to be absolute. Default is to take the script from the package directory.
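A minimal usage sketch, assuming the BatchJobs package is installed; the parameter values and the registry id "multicore_demo" are purely illustrative:

```r
library(BatchJobs)

# Illustrative settings: 2 workers, niced to priority 10, and no new
# submissions once the 5-minute load average reaches 2.
cf <- makeClusterFunctionsMulticore(ncpus = 2, max.load = 2, nice = 10)

# Register the cluster functions in the configuration, then use a
# registry as usual.
setConfig(cluster.functions = cf)
reg <- makeRegistry(id = "multicore_demo", file.dir = tempfile())
batchMap(reg, function(x) x^2, 1:4)
submitJobs(reg)
waitForJobs(reg)
loadResults(reg)
```

Alternatively, the cluster functions can be set permanently in a .BatchJobs.R configuration file instead of calling setConfig in each session.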



See Also

Other clusterFunctions: ClusterFunctions, makeClusterFunctions, makeClusterFunctionsInteractive, makeClusterFunctionsLSF, makeClusterFunctionsLocal, makeClusterFunctionsSGE, makeClusterFunctionsSLURM, makeClusterFunctionsSSH, makeClusterFunctionsTorque


Please suggest features or report bugs via the GitHub issue tracker.
