EPM_job_split | R Documentation
Description

Assess the number of PubMed records expected from a user-provided query and split the job into multiple sub-queries if that number exceeds max_records_per_batch (typically, n=10,000). Sub-queries are split according to the "Create Date" (i.e., "[CRDT]") of PubMed records. Jobs cannot be split further when more than max_records_per_batch (typically, n=10,000) records share the same "Create Date".

Usage
EPM_job_split(
query_string,
api_key = NULL,
max_records_per_batch = 9999,
verbose = FALSE
)
Arguments

query_string
String (character vector of length 1), corresponding to the query string.

api_key
String (character vector of length 1), corresponding to the NCBI API key. Can be NULL.

max_records_per_batch
Integer, maximum number of records expected per sub-query. This number should be in the range 1,000 to 10,000 (typically, max_records_per_batch = 10,000).

verbose
Logical, whether progress information should be printed to the console.
Value

Character vector containing the response from the server.
Author(s)

Damiano Fantini, damiano.fantini@gmail.com

References

https://www.data-pulse.com/dev_site/easypubmed/
Examples

# Note: a time limit can be set in order to kill the operation when/if
# the NCBI/Entrez server becomes unresponsive.
setTimeLimit(elapsed = 4.9)
try({
qry <- 'Damiano Fantini[AU] AND "2018"[PDAT]'
easyPubMed:::EPM_job_split(query_string = qry, verbose = TRUE)
}, silent = TRUE)
setTimeLimit(elapsed = Inf)
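The date-based splitting strategy described above can be sketched in plain R. This is an illustrative approximation only, not easyPubMed's internal implementation: the helper name `split_query_by_date` and the example date ranges are hypothetical, and the real function determines the "[CRDT]" boundaries automatically from server-side record counts.

```r
# Illustrative sketch: partition a query into date-bound sub-queries
# using PubMed's "[CRDT]" (Create Date) field tag and a date-range
# search ("start"[CRDT] : "end"[CRDT]).
split_query_by_date <- function(query_string, date_ranges) {
  vapply(date_ranges, function(rng) {
    paste0('(', query_string, ') AND ("',
           rng[1], '"[CRDT] : "', rng[2], '"[CRDT])')
  }, FUN.VALUE = character(1))
}

qry <- 'Damiano Fantini[AU]'
# Hypothetical split of the year 2018 into two halves
halves <- list(c("2018/01/01", "2018/06/30"),
               c("2018/07/01", "2018/12/31"))
subqueries <- split_query_by_date(qry, halves)
subqueries
```

Each resulting sub-query is expected to return at most max_records_per_batch records, so the batches can be downloaded independently.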