* Nothing.
* `ClusterFunctionsMulticore`.
* Fixed a bug in `addExperiments()` in combination with combination method "bind" and `repls` > 1 where experiments have been duplicated.
* `addExperiments()` now also accepts a vector of replications (instead of a single scalar value) for argument `repls`.
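For context, here is a minimal `ExperimentRegistry` workflow around `addExperiments()` (a sketch on a throwaway registry; the problem, algorithm, and parameters are invented for illustration):

```r
library(batchtools)
library(data.table)

reg = makeExperimentRegistry(file.dir = NA)  # NA -> temporary registry

addProblem("subsample", data = iris,
  fun = function(data, job, frac, ...) data[sample(nrow(data), frac * nrow(data)), ])
addAlgorithm("count",
  fun = function(data, job, instance, multiplier, ...) multiplier * nrow(instance))

# The default combination method crosses problem and algorithm designs;
# method "bind" instead pairs them row by row (see the entries above).
# Per the changelog, repls may also be given as a vector rather than a single scalar.
addExperiments(
  prob.designs = list(subsample = data.table(frac = c(0.5, 0.8))),
  algo.designs = list(count = data.table(multiplier = 1:2)),
  repls = 2
)
summarizeExperiments()
```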
* `ClusterFunctionsSlurm`.
* `waitForJobs()`.
* `batchMap()` now supports unnamed `more.args`.
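A small sketch of `batchMap()` with an unnamed `more.args` entry (throwaway registry; the toy function is invented):

```r
library(batchtools)

reg = makeRegistry(file.dir = NA)  # NA -> temporary registry

# more.args holds constant arguments for every job; per the entry above the
# list entries may now be unnamed (here, 10 is presumably matched to y by position).
batchMap(function(x, y) x + y, x = 1:3, more.args = list(10))

submitJobs()
waitForJobs()
reduceResultsList()  # list(11, 12, 13)
```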
* `delayedAssign()`.
* Moved `data.table` from Depends to Imports. User scripts might need to explicitly attach `data.table` via `library()` now.
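Because `data.table` is no longer attached automatically, scripts that post-process the tables returned by batchtools may need an explicit `library(data.table)`. A sketch:

```r
library(batchtools)
library(data.table)  # formerly pulled in via Depends; attach it explicitly now

reg = makeRegistry(file.dir = NA)
batchMap(function(x) x^2, x = 1:4)
submitJobs()
waitForJobs()

tab = getJobTable()               # a data.table
done = tab[!is.na(done), .(job.id, time.running)]
setkey(done, job.id)              # helpers like setkey() come from data.table itself
```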
* Compatibility of `ClusterFunctionsMulticore` with `system2()` for R-devel (to be released as R-4.0.0).
* New argument `compress` to select the compression algorithm (passed down to `saveRDS()`).
* `chunkIds()`.
* `fs.timeout` in the cluster function constructor now defaults to 0 (was `NA` before).
* `findConfFile()` and `findTemplateFile()`.
* `TMPDIR` is now used instead of the R session's temporary directory.
* `fs` is now used internally for all file system operations.
* `getStatus()` now includes a time stamp.
* `chunk()` now optionally shuffles the ids before chunking (see the sketch below).
* `submitJobs()`.
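To show how chunking typically plays together with `submitJobs()`, here is a sketch that follows the documented pattern of adding a `chunk` column to the ids table; the `shuffle` flag name is taken from the entry above and the chunk size is arbitrary:

```r
library(batchtools)
library(data.table)

reg = makeRegistry(file.dir = NA)
batchMap(function(x) mean(rnorm(1e5, mean = x)), x = 1:20)

# Group several jobs into one batch job: add a "chunk" column before submitting.
ids = findNotSubmitted()
ids[, chunk := chunk(job.id, chunk.size = 5, shuffle = TRUE)]
submitJobs(ids)
waitForJobs()
```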
* `blas.threads` and `omp.threads`.
* `assertRegistry()`.
* Added `unwrap()` as alias to `flatten()` (see the sketch below). The latter causes a name clash with package `purrr` and will be deprecated in a future version.
* `waitForJobs()`.
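A short sketch of `unwrap()` as the forward-compatible spelling (throwaway registry, toy parameters):

```r
library(batchtools)

reg = makeRegistry(file.dir = NA)
batchMap(function(x, y) x + y, x = 1:3, y = 3:1)

getJobPars()          # parameters kept in a list column (job.pars)
unwrap(getJobPars())  # one column per parameter (x, y)
```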
* `foreach` is now supported for nested parallelization as an alternative to `parallelMap`.
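One way to picture nested parallelization with `foreach` inside a job. This sketch registers a `doParallel` backend manually in the job function (assuming `foreach` and `doParallel` are installed); it does not show the specific wiring batchtools itself provides for this, and the `ncpus` resource is an assumption:

```r
library(batchtools)

reg = makeRegistry(file.dir = NA)

# Outer level: one batchtools job per value of n.
# Inner level: a foreach loop parallelized on the worker.
batchMap(function(n) {
  library(foreach)
  library(doParallel)
  registerDoParallel(cores = 2)
  foreach(i = seq_len(n), .combine = c) %dopar% sqrt(i)
}, n = c(4, 8))

submitJobs(resources = list(ncpus = 2))
waitForJobs()
```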
* `flatten()` to manually unnest/unwrap lists in data frames.
* Removed `getProblemIds()` and `getAlgorithmIds()`. Instead, you can just access `reg$problems` or `reg$algorithms`, respectively.
* `loadRegistry()`.
* `ExperimentRegistry`.
* Defaults for `waitForJobs()` and `submitJobs()` can now be set via the configuration file.
* `waitForJobs()` has been reworked to allow control over the heuristic to detect expired jobs. Jobs are treated as expired if they have been submitted but are not detected on the system for `expire.after` iterations (default 3 iterations, before 1 iteration).
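For illustration, a call that relaxes the expiration heuristic; `expire.after` is named in the entry above, while `sleep` as the polling-interval argument is an assumption:

```r
library(batchtools)

reg = makeRegistry(file.dir = NA)
batchMap(Sys.sleep, time = c(1, 2))
submitJobs()

# Poll roughly every 10 seconds; treat a submitted job as expired only after it
# has been missing from the scheduler for 5 consecutive status queries.
waitForJobs(sleep = 10, expire.after = 5)
```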
* New argument `writeable` for `loadRegistry()` to allow loading registries explicitly as read-only.
* Removed argument `update.paths` from `loadRegistry()`. Paths are always updated, but the registry on the file system remains unchanged unless loaded in read-write mode.
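A sketch of read-only versus writeable loading (the registry path is a placeholder):

```r
library(batchtools)

# Read-only (default): safe for inspecting a registry, e.g. while jobs are running.
reg = loadRegistry("path/to/registry")
getStatus(reg = reg)

# Writeable: needed for operations that modify the registry, such as submitting.
reg = loadRegistry("path/to/registry", writeable = TRUE)
submitJobs(findNotSubmitted(reg = reg), reg = reg)
```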
* `ClusterFunctionsSlurm` now come with an experimental `nodename` argument. If set, all communication with the master is handled via SSH, which effectively allows you to submit jobs from your local machine instead of the head node. Note that mounting the file system (e.g., via SSHFS) is mandatory.
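A possible `batchtools.conf.R` for that setup; the template and node names are placeholders, and only the `nodename` argument comes from the entry above:

```r
# batchtools.conf.R
# Submission commands are sent to the cluster's head node via SSH, so jobs can be
# submitted from a local machine. The registry's file.dir must be mounted locally
# at the same path, e.g. via SSHFS.
cluster.functions = makeClusterFunctionsSlurm(
  template = "slurm",
  nodename = "login.example.org"
)
```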
* Fixed handling of `file.dir` with special chars like whitespace.
* `findExperiments()`: argument `ids` is now first.
* `addExperiments()` now warns if a design is passed as `data.frame` with factor columns and `stringsAsFactors` is `TRUE`.
* Added `setJobNames()` and `getJobNames()` to control the name of jobs on batch systems. Templates should be adapted to use `job.name` instead of `job.hash` for naming.
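A sketch of naming jobs before submission. The exact signature of `setJobNames()` is assumed here (a table of job ids plus a character vector of names), and the name scheme is invented:

```r
library(batchtools)

reg = makeRegistry(file.dir = NA)
ids = batchMap(function(x) x^2, x = 1:3)

# One name per job; templates can then label scheduler jobs with job.name
# instead of job.hash.
setJobNames(ids, sprintf("sim-%02i", ids$job.id))
getJobNames(ids)
```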
* Argument `flatten` of `getJobResources()`, `getJobPars()` and `getJobTable()` is deprecated and will be removed. Future versions of the functions will behave like `flatten` is set to `FALSE` explicitly. Single resources/parameters must be extracted manually (or with `tidyr::unnest()`).
* `findStarted()`, `findNotStarted()` and `getStatus()`.
* `findExperiments()` now performs an exact string match (instead of matching substrings) for patterns specified via `prob.name` and `algo.name`. For substring matching, use `prob.pattern` or `algo.pattern`, respectively.
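To contrast exact and substring matching, a sketch on a throwaway experiment registry (the problem and algorithm definitions are invented):

```r
library(batchtools)
library(data.table)

reg = makeExperimentRegistry(file.dir = NA)
addProblem("forest", fun = function(data, job, n, ...) runif(n))
addProblem("forest.small", fun = function(data, job, n, ...) runif(n))
addAlgorithm("avg", fun = function(data, job, instance, trim, ...) mean(instance, trim = trim))
addExperiments(
  prob.designs = list(forest = data.table(n = 10), forest.small = data.table(n = 5)),
  algo.designs = list(avg = data.table(trim = 0))
)

findExperiments(prob.name = "forest")      # exact match: only problem "forest"
findExperiments(prob.pattern = "^forest")  # regular expression: both problems
```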
* `reduceResultsDataTable()`: `fill` is now always `TRUE`; new argument `flatten` to control if the result should be represented as a column of lists or flattened as separate columns. Defaults to a backward-compatible heuristic, similar to `getJobPars`.
* `n.array.jobs` has been removed from `JobCollection` in favor of the new variable `array.jobs` (logical).
* `findExperiments()` now has two additional arguments to match using regular expressions. The possibility to prefix a string with "~" to enable regular expression matching has been removed.
* `batchReduce()`.
* `estimateRuntimes()`.
* `removeRegistry()`.
* `missing.val` has been added to `reduceResultsList()` and `reduceResultsDataTable()` and removed from `loadResult()` and `batchMapResults()`.
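A one-glance sketch of `missing.val` as I read it: jobs without a stored result get the placeholder value instead of raising an error (throwaway registry; one job is deliberately left unsubmitted):

```r
library(batchtools)

reg = makeRegistry(file.dir = NA)
batchMap(function(x) x^2, x = 1:3)
submitJobs(1:2)   # job 3 never runs, so it has no result
waitForJobs()

reduceResultsList(ids = 1:3, missing.val = NA)  # list(1, 4, NA)
```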
* `makeClusterFunctionsTorque` must now be called via `makeClusterFunctionsTORQUE()`.
* `chunkIds()` has been deprecated. Use `chunk()`, `lpt()` or `binpack()` instead.
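To sketch the difference between the three grouping helpers (the weights below are made-up per-job runtimes in seconds):

```r
library(batchtools)

runtimes = c(30, 5, 10, 120, 60, 15, 45)

chunk(runtimes, n.chunks = 3)        # groups of roughly equal size, ignoring the values
binpack(runtimes, chunk.size = 120)  # greedy bins whose summed runtime stays <= 120
lpt(runtimes, n.chunks = 3)          # 3 groups with balanced total runtime
```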
* `ClusterFunctionsLSF` and `ClusterFunctionsOpenLava` (thanks to @phaverty).
* `NULL` results in `reduceResultsList()`.
* Fixed: `getJobTable()` returned difftimes with the wrong unit (e.g., in minutes instead of seconds).
* `ClusterFunctionsDocker`.
* Initial CRAN release. See the vignette for a brief comparison with BatchJobs/BatchExperiments.