fetch_in_chunks: Downloads abstracts and metadata from PubMed, storing as R...


Description

Splits large ID vectors into a list of smaller chunks, so as not to overload the Entrez server.

Usage

fetch_in_chunks(ids, chunk_size = 500, delay = 0, ...)

Arguments

ids

integer. PubMed IDs for which to retrieve abstracts and metadata

chunk_size

Number of articles to be pulled with each call to pubmed_fetch (optional; default 500)

delay

integer. Number of hours to wait before downloading starts

...

character. Additional terms to append to the request

Details

If you are making large bulk downloads, consider setting a delay so the downloading starts at off-peak USA times.
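For example, a delayed off-peak download can be scheduled using only the documented arguments (here `my_ids` is assumed to be an integer vector of PubMed IDs you have already collected):

```r
# Start a bulk download 6 hours from now, pulling 200 records per
# request; both chunk_size and delay are documented arguments above.
records <- fetch_in_chunks(my_ids, chunk_size = 200, delay = 6)
```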

Value

list containing abstracts and metadata for each ID

Examples

## Not run: 
 # Get IDs via rentrez's entrez_search():
 plasticity_ids <- entrez_search("pubmed", "phenotypic plasticity", retmax = 2600)$ids
 plasticity_records <- fetch_in_chunks(plasticity_ids)

## End(Not run)
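The chunking idea itself is simple to sketch in base R. This is an illustration of the concept, not the package's actual internals: the ID vector is split into groups of at most `chunk_size`, and each group would then be passed to a single fetch call.

```r
# Illustrative sketch: split 1234 IDs into chunks of at most 500.
chunk_size <- 500
ids <- seq_len(1234)
chunks <- split(ids, ceiling(seq_along(ids) / chunk_size))
length(chunks)  # 3 chunks, of 500, 500, and 234 IDs
```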

rOpenHealth/rpubmed documentation built on May 26, 2019, 8:51 p.m.