To take advantage of parallel tests, add the following line to your `DESCRIPTION`:

    Config/testthat/parallel: true

You'll also need to be using the 3rd edition:

    Config/testthat/edition: 3
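Together, the relevant `DESCRIPTION` entries look like this (a minimal excerpt; the package name is a placeholder):

    Package: mypackage
    Config/testthat/edition: 3
    Config/testthat/parallel: true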
Starting a new R process is relatively expensive, so testthat begins by creating a pool of workers.
The size of the pool is determined first by `getOption("Ncpus")`, then by the `TESTTHAT_CPUS` environment variable. If neither is set, two processes are started. In any case, testthat never starts more subprocesses than there are test files.
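For example, to get a pool of four workers, you could set either value before running the tests (a sketch; the value 4 is arbitrary):

```r
# testthat checks Ncpus first, then falls back to the TESTTHAT_CPUS
# environment variable; setting either one is enough.
options(Ncpus = 4)
Sys.setenv(TESTTHAT_CPUS = "4")
```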
Each worker begins by loading testthat and the package being tested. It then runs any setup files (so if you have existing setup files you'll need to make sure they work when executed in parallel).
testthat runs test files in parallel. Once the worker pool is initialized, testthat starts sending test files to the workers, by default in alphabetical order: as soon as a subprocess finishes a file, it receives the next one, until all files are done. This means that state persists across test files: options are not reset, loaded packages are not unloaded, the global environment is not cleared, etc. You are responsible for making sure each file leaves the world as it finds it.
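One way to keep each file self-contained is to scope any state changes to the test itself. A minimal sketch, assuming the withr package is available:

```r
# Changes made with withr's local_* helpers are reverted when the test
# exits, so the worker process is left as it was found.
test_that("verbose mode prints extra output", {
  withr::local_options(list(verbose = TRUE))
  expect_true(isTRUE(getOption("verbose")))
})
```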
Because files are run in alphabetical order, you may want to rename your slowest test files so that they start first, e.g. `test-1-slowest.R`, `test-2-next-slowest.R`, etc.
If tests fail stochastically (i.e. they sometimes work and sometimes fail), you may have accidentally introduced a dependency between your test files. This sort of dependency is hard to track down precisely because of its randomness, and you'll need to check all tests to make sure that they're not accidentally changing global state.
If you use package-scoped test fixtures, you'll need to review them to make sure that they work in parallel. For example, if you were previously creating a temporary database in the test directory, you'd need to create it in the session temporary directory instead, so that each process gets its own independent version.
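For instance, a setup file could create the database with `tempfile()`, which points into the session temporary directory of each worker. A sketch, where `create_test_db()` is a hypothetical helper and the withr package is assumed:

```r
# tests/testthat/setup-db.R (sketch): each worker process runs this setup
# file, so each worker gets its own private copy of the database.
db_path <- tempfile(fileext = ".sqlite")
create_test_db(db_path)  # hypothetical helper that builds the fixture
withr::defer(unlink(db_path), teardown_env())  # clean up when testing ends
```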
There is some overhead associated with running tests in parallel:

- Startup cost is linear in the number of subprocesses, because we need to create them in a loop. This is about 50ms per subprocess on my laptop. Each subprocess also needs to load testthat and the tested package; this happens in parallel, and we cannot do much about it.
- Cleanup time is again linear in the number of subprocesses, at about 80ms per subprocess on my laptop.
- Sending a message (i.e. a passing or failing expectation) currently takes about 2ms. This is the total cost, including sending the message, receiving it, and relaying it to a non-parallel reporter.

This overhead generally means that if you have many test files that each take a short amount of time, you're unlikely to see a huge benefit from parallel tests. For example, testthat itself takes about 10s to run its tests in serial, and 8s in parallel.
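As a rough back-of-the-envelope calculation using the numbers above: with four workers and 1000 expectations, the fixed overhead is about 4 × 50ms (startup) + 4 × 80ms (cleanup) + 1000 × 2ms (messaging) ≈ 2.5s, so parallelization only pays off if it saves more time than that.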
By default testthat starts the test files in alphabetical order. If you have a few test files that take much longer than the rest, this might not be the best order. Ideally the slow files would start first, as the whole test suite will take at least as much time as its slowest test file. You can change the order with the `Config/testthat/start-first` option in `DESCRIPTION`. For example, testthat currently has:

    Config/testthat/start-first: watcher, parallel*
The format is a comma-separated list of glob patterns (see `?utils::glob2rx`). The matching test files will start first. (The `test-` prefix is ignored.)
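To see which files a pattern selects, you can translate it to a regular expression yourself. An illustrative check (the file names are made up):

```r
# The test- prefix is ignored when matching, so we match the pattern
# against the bare names.
files <- c("watcher", "parallel-setup", "parallel-crash", "basic")
grepl(utils::glob2rx("parallel*"), files)
#> [1] FALSE  TRUE  TRUE FALSE
```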
See `default_reporter()` for how testthat selects the default reporter for `devtools::test()` and `testthat::test_local()`. In short, by default testthat selects `ProgressReporter` for non-parallel tests and `ParallelProgressReporter` for parallel tests. (Other testthat test functions, like `test_check()`, `test_file()`, etc. select different reporters by default.)
Most reporters support parallel tests. If a reporter that does not support parallel tests is passed directly to `devtools::test()`, `testthat::test_dir()`, etc., then testthat runs the test files sequentially.
Currently the following reporters don't support parallel tests:

- `DebugReporter`, because it is not currently possible to debug subprocesses.
- `JunitReporter`, because this reporter records timing information for each test block, and this is currently only available for reporters that support multiple active test files. (See "Writing parallel reporters" below.)
- `LocationReporter`, because testthat currently does not include location information for successful tests when running in parallel, to minimize messaging between the processes.
- `StopReporter`, as this is the reporter that testthat uses for interactive `expect_that()` calls.
The other built-in reporters all support parallel tests, with some subtle differences:

- Reporters that stop after a certain number of failures can only stop at the end of a test file.
- Reporters report all information about a file at once, unless they support parallel updates. E.g. `ProgressReporter` does not update its display until a test file is complete.
The standard output and standard error of the test files, i.e. output from `print()`, `cat()`, `message()`, etc., are currently lost. If you want to use `cat()` or `message()` to print-debug test cases, it is best to temporarily run the tests sequentially, either by changing the `Config/testthat/parallel` entry in `DESCRIPTION` or by selecting a non-parallel reporter, e.g. the `CheckReporter`:
```r
devtools::test(filter = "badtest", reporter = "check")
```
To support parallel tests, a reporter must be able to function when the test files run in a subprocess. For example, `DebugReporter` does not support parallel tests, because it requires direct interaction with the frames in the subprocess. When running in parallel, testthat does not provide location information (source references) for test successes.
To support parallel tests, a reporter must set `self$capabilities$parallel_support` to `TRUE` in its `initialize()` method:

```r
...
initialize = function(...) {
  super$initialize(...)
  self$capabilities$parallel_support <- TRUE
  ...
}
...
```
When running in parallel, testthat runs the reporter in the main process, and relays information between the reporter and the test code transparently. (Currently the reporter does not even know that the tests are running in parallel.)
If a reporter does not support parallel updates (see below), then testthat internally caches all calls to the reporter methods from the subprocesses until a test file is complete. This is because these reporters are not prepared to handle multiple test files running concurrently. Once a test file is complete, testthat calls the reporter's `$start_file()` method, relays all `$start_test()`, `$end_test()`, `$add_result()`, etc. calls in the order they came in from the subprocess, and finally calls `$end_file()`.
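For one completed file, the relayed sequence might look like this (an illustrative pseudo-trace, not actual testthat output; the file and test names are made up):

```r
# Replay of the cached calls for a single finished test file;
# `res` stands for an expectation result relayed from the subprocess.
reporter$start_file("test-foo.R")
reporter$start_test(context = NULL, test = "foo works")
reporter$add_result(context = NULL, test = "foo works", result = res)
reporter$end_test(context = NULL, test = "foo works")
reporter$end_file()
```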
The `ParallelProgressReporter` supports parallel updates. This means that as soon as a message from a subprocess comes in, the reporter is updated immediately. For this to work, a reporter must be able to handle multiple test files concurrently. A reporter declares parallel update support by setting `self$capabilities$parallel_updates` to `TRUE`:
```r
...
initialize = function(...) {
  super$initialize(...)
  self$capabilities$parallel_support <- TRUE
  self$capabilities$parallel_updates <- TRUE
  ...
}
...
```
For these reporters, testthat does not cache the messages from the subprocesses. Instead, when a message comes in:

1. It calls the `$start_file()` method, letting the reporter know which file the following calls apply to. This means that the reporter can receive multiple `$start_file()` calls for the same file.
2. It then relays the message from the subprocess, calling the appropriate `$start_test()`, `$add_result()`, etc. method.
testthat also calls the reporter's new `$update()` method regularly, even if it does not receive any messages from the subprocesses. (It currently aims to do this every 100ms, but there are no guarantees.) The `$update()` method may implement a spinner to let the user know that the tests are running.
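Putting the pieces together, a reporter with parallel updates might look like the sketch below (assuming the R6 package; the dot-printing in `$update()` is purely illustrative):

```r
SpinnerReporter <- R6::R6Class("SpinnerReporter",
  inherit = testthat::Reporter,
  public = list(
    initialize = function(...) {
      super$initialize(...)
      self$capabilities$parallel_support <- TRUE
      self$capabilities$parallel_updates <- TRUE
    },
    update = function() {
      # Called roughly every 100ms while tests run, even when no new
      # messages arrive; a real reporter might advance a spinner here.
      cat(".")
    }
  )
)
```

This is only a sketch: a real reporter would also implement the usual result-handling methods, such as `$add_result()`.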