View source: R/main_scraping_helpers.R
read_page | R Documentation |
A simple HTML page reader with a debouncer. This is a stupidly simple function that works around rate limits and anti-crawling measures by waiting dynamically between requests, up to a set maximum.
The wait is heuristic, and the call may still return a try-error that must be handled downstream. Ideally this would be handled with OOP at the app level rather than at the request level.
read_page(page_url, max_wait = 30)
Arguments
page_url |
(character) The URL we want to read |
max_wait |
(numeric) Maximum debounce wait between requests |
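A minimal sketch of what such a debounced reader might look like. This is an illustrative assumption, not the package's actual implementation: it assumes the debounce is a uniform random sleep and that the page is fetched with `xml2::read_html()`.

```r
# Hypothetical sketch of read_page(); the real implementation may differ.
read_page <- function(page_url, max_wait = 30) {
  # Debounce: sleep a random interval (in seconds) before requesting,
  # to reduce the chance of tripping rate limits or anti-crawling checks.
  Sys.sleep(stats::runif(1, min = 0, max = max_wait))

  # Wrap the request so a failure comes back as a try-error object
  # that the caller must check for and handle downstream.
  try(xml2::read_html(page_url), silent = TRUE)
}

# Usage: check the result before parsing further.
# result <- read_page("https://example.com", max_wait = 10)
# if (!inherits(result, "try-error")) { ... parse result ... }
```

Returning the try-error rather than stopping keeps the scraping loop alive across transient failures, at the cost of pushing error handling onto every caller, which is the trade-off the description notes.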