fetch_in_chunks: Downloads abstracts and metadata from PubMed, storing as R objects

View source: R/rpubmed_fetch.R

Description

Downloads abstracts and metadata from PubMed, storing them as R objects. Splits large ID vectors into a list of smaller chunks, so as not to hammer the Entrez server! If you are making large bulk downloads, consider setting a delay so that downloading starts at off-peak USA times.
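
For example, a large bulk download can be split into batches of 200 articles and postponed by eight hours (a minimal sketch; my_ids stands for any integer vector of PubMed IDs you already have and is not part of the package):

 library(rpubmed)
 # Hypothetical: my_ids is an integer vector of PubMed IDs
 # chunk_size = 200 pulls 200 articles per request; delay = 8 waits 8 hours before starting
 records <- fetch_in_chunks(my_ids, chunk_size = 200, delay = 8)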

Usage

  fetch_in_chunks(ids, chunk_size = 500, delay = 0, ...)

Arguments

ids

Integer. PubMed IDs to get abstracts and metadata for

chunk_size

Number of articles to be pulled with each call to pubmed_fetch (optional)

delay

Integer. Number of hours to wait before downloading starts

...

Character. Additional terms to add to the request

Value

A list containing abstracts and metadata for each ID
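
For instance, assuming records holds the value returned by fetch_in_chunks(), the result can be inspected with base R tools (a sketch; the exact fields depend on the PubMed records retrieved):

 length(records)                   # one element per ID
 str(records[[1]], max.level = 1)  # top-level fields of the first record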

Examples

## Not run: 
 # Get IDs via rentrez's entrez_search():
 plasticity_ids <- entrez_search("pubmed", "phenotypic plasticity", retmax = 2600)$ids
 plasticity_records <- fetch_in_chunks(plasticity_ids)

## End(Not run)
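
Because the result is an ordinary R list, it can be written to disk and reloaded later (a usage sketch; saveRDS()/readRDS() are base R and the file name is illustrative):

 saveRDS(plasticity_records, "plasticity_records.rds")
 plasticity_records <- readRDS("plasticity_records.rds")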
