This variation on
req_perform() performs multiple requests in parallel.
Unlike req_perform(), it always succeeds; it will never throw an error.
Instead it will return error objects, which are your responsibility to handle.
Exercise caution when using this function; it's easy to pummel a server with many simultaneous requests. Only use it with hosts designed to serve many files at once.
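One way to heed this caution is to impose your own ceiling on concurrency. The helper below is a sketch, not part of httr2: `perform_in_batches()` is a hypothetical name, and it assumes `reqs` is a list of httr2 request objects.

```r
# Sketch: limit server load by performing requests in small batches,
# pausing between batches. Assumes `reqs` is a list of httr2 requests.
perform_in_batches <- function(reqs, batch_size = 5, pause = 1) {
  # Group requests into consecutive batches of at most `batch_size`
  batches <- split(reqs, ceiling(seq_along(reqs) / batch_size))
  resps <- list()
  for (batch in batches) {
    resps <- c(resps, multi_req_perform(batch))
    # Pause between batches, but not after the last one
    if (length(resps) < length(reqs)) Sys.sleep(pause)
  }
  resps
}
```

The batch size to choose depends entirely on the host; a handful of simultaneous requests is usually safe, but check the API's documentation for rate limits.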
reqs: A list of requests.

paths: An optional list of paths, if you want to download the request
bodies to disk. If supplied, must be the same length as reqs.

pool: Optionally, a curl pool made by curl::new_pool().

cancel_on_error: Should all pending requests be cancelled when you
hit an error? Set this to TRUE to stop all requests as soon as any
request fails.
A list the same length as reqs where each element is either a
response or an error object.
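Because the result is positionally aligned with reqs, failed requests can be identified and re-run by index. A small sketch, using a stand-in list in place of a real result (the check assumes failures inherit from class "error", as described above):

```r
# `resps` stands in for a multi_req_perform() result: one failure, one success
resps <- list(simpleError("connection failed"), "a response")
failed <- vapply(resps, inherits, logical(1), what = "error")
failed
#> [1]  TRUE FALSE
# With real requests you could then re-perform just the failures:
# retry_resps <- multi_req_perform(reqs[failed])
```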
Will not retrieve a new OAuth token if it expires part way through the requests.
Does not perform throttling with req_throttle().
Does not attempt retries as described by req_retry().
Only consults the cache set by req_cache() before/after all requests.
In general, where req_perform() might make multiple requests due to retries
or OAuth failures, multi_req_perform() will only make one.
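Since retries are not automatic, one option is to layer them on yourself. The wrapper below is a sketch, not part of httr2 (`multi_perform_with_retry()` is a hypothetical name), and it simply re-performs every failed request up to a fixed number of tries, without the backoff that req_retry() would give you.

```r
# Sketch: manually re-perform failed requests, since multi_req_perform()
# does not retry for you.
multi_perform_with_retry <- function(reqs, max_tries = 3) {
  resps <- multi_req_perform(reqs)
  for (attempt in seq_len(max_tries - 1)) {
    failed <- vapply(resps, inherits, logical(1), what = "error")
    if (!any(failed)) break
    # Re-run only the failed requests; results slot back in by position
    resps[failed] <- multi_req_perform(reqs[failed])
  }
  resps
}
```

A production version would likely also sleep between attempts and give up on requests whose errors are not transient.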
# Requesting these 4 pages one at a time would take four seconds:
reqs <- list(
  request("https://httpbin.org/delay/1"),
  request("https://httpbin.org/delay/1"),
  request("https://httpbin.org/delay/1"),
  request("https://httpbin.org/delay/1")
)
# But it's much faster if you request in parallel
system.time(resps <- multi_req_perform(reqs))

reqs <- list(
  request("https://httpbin.org/status/200"),
  request("https://httpbin.org/status/400"),
  request("FAILURE")
)
# multi_req_perform() will always succeed
resps <- multi_req_perform(reqs)
# You'll need to inspect the results to figure out which requests failed
fail <- vapply(resps, inherits, "error", FUN.VALUE = logical(1))
resps[fail]