View source: R/paths_allowed.R
Check if a bot has permission to access page(s)
paths | paths for which to check the bot's permission; defaults to "/"
domain | domain for which the paths should be checked; defaults to "auto". If set to "auto", the function will try to guess the domain by parsing the paths argument. Note, however, that these are educated guesses which might utterly fail. To be on the safe side, provide appropriate domains manually.
bot | name of the bot; defaults to "*"
user_agent | HTTP user-agent string to be used to retrieve the robots.txt file from the domain
check_method | kept only for backward compatibility; do not use this parameter anymore, the function will simply use the default
warn | warn about being unable to download domain/robots.txt, e.g. because of an HTTP response status 404
force | if TRUE, instead of using possibly cached results the function will re-download the robots.txt file
ssl_verifypeer | analogous to the CURL option https://curl.haxx.se/libcurl/c/CURLOPT_SSL_VERIFYPEER.html; might help with robots.txt file retrieval in some cases
use_futures | whether future::future_lapply should be used for possible parallel/async retrieval. Note: check out the help pages and vignettes of the future package on how to set up plans for future execution, because the robotstxt package does not do this on its own.
robotstxt_list | either NULL (the default) or a list of character vectors with one vector per path to check
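The arguments above can be combined as in the following sketch. The domain, paths, and robots.txt content here are purely illustrative assumptions, not taken from this documentation; it assumes the robotstxt package is installed.

```r
# A minimal sketch of how the arguments above fit together
# (example.com and the paths are illustrative).
library(robotstxt)

# Check several paths at once for the default bot ("*");
# with domain = "auto" the domain would instead be guessed
# from full-URL paths, which can fail.
paths_allowed(
  paths  = c("/", "/images/", "/search"),
  domain = "example.com",
  bot    = "*"
)

# Supplying robotstxt_list skips consulting a downloaded file:
# one character vector of robots.txt lines per path to check.
rtxt <- c("User-agent: *", "Disallow: /search")
paths_allowed(
  paths          = c("/", "/search"),
  domain         = "example.com",
  robotstxt_list = list(rtxt, rtxt)
)
```

The return value is a logical vector, one element per path, indicating whether the bot may access it.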