sanders

Web-scraping and Web-crawling Content Parsing, Validation and Sanitization Helpers

Description

When researchers crawl or scrape the web for content, they are talking to strange computers running software created by a diverse array of humans. Content -- even content that is supposed to adhere to internet standards -- can have very rough edges that need to be smoothed out before it is useful, and before it can do harm to the systems running the scraping and analysis code. Methods are provided that sand off the rough edges of many different types of scraped content and metadata.

The following functions are implemented:

normalize_media_types: normalize scraped Content-Type (media type) strings and extract their parameters

Installation

devtools::install_github("hrbrmstr/sanders")
options(width=120)

Usage

library(sanders)

# current version
packageVersion("sanders")

content_type <- c("text/html; charset=utf-8", "text/css",
                  "text/javascript; charset=UTF-8", "text/javascript",
                  "application/x-javascript", "text/plain; charset=utf-8")

x <- normalize_media_types(content_type)

x

x$params
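
A minimal follow-up sketch, assuming only what the example above shows (a result x that exposes a $params element, plus the original content_type vector); no other column names are assumed:

# inspect the full structure of the normalized result, since only $params
# is displayed above and the remaining columns are not documented here
str(x)

# a package-agnostic downstream check: keep only the raw Content-Type values
# that look like HTML before handing responses to an HTML parser
content_type[grepl("html", tolower(content_type), fixed = TRUE)]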

Test Results

library(sanders)
library(testthat)

date()

test_dir("tests/")

