knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
library(dplyr)
library(webhoser)

Setup

Let's walk through webhoser with a simple use case: fetching articles and blog posts about the R programming language. But first, we need to authenticate; head over to webhose.io if you do not have an account yet. Your account comes with an API key, which is what you pass to the function below.

wh_token("xXX-x0X0xX0X-00X")

Get

Great, now that we have a token, we can fetch some articles. We'll use some basic boolean search arguments and filters to narrow the search down to English-language blog posts, since blogs are where we'd expect the R programming language to be mentioned. We can expect the number of matching articles to be relatively low, so we'll also extend the date range to make sure we get a decent number of results (the functions default to the last 3 days).

rstats <- wh_news(
    q = '"R programming" is_first:true language:english site_type:blogs',
    ts = (Sys.time() - (30 * 24 * 60 * 60))
  ) 

Note that with the free plan you are limited to the last 30 days of results.

webhose.io gives you 1,000 results per month for free; wh_news and wh_broadcasts print the number of queries you have left by default.

Paginate

We can paginate through the results if we want more data; let's get five more pages.

rstats_paginated <- rstats %>% 
  wh_paginate(5)

class(rstats_paginated)

So we fetched one page of results, then used that object to paginate through 5 more pages. However, this still returns an object of class webhoser; to turn it into a data.frame, use wh_collect. The wh_collect function takes another argument, flatten, which defaults to FALSE.

  1. If TRUE the function attempts to flatten the results into a standard data.frame.
  2. If FALSE the function returns a nested data.frame.

rstats_df <- wh_collect(rstats_paginated)
class(rstats_df)
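
For instance, to attempt a flat data.frame instead (the name rstats_flat is just for illustration):

rstats_flat <- wh_collect(rstats_paginated, flatten = TRUE)
class(rstats_flat)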

The steps above can be summed up in a single pipe:

rstats <- wh_news(
    q = '"R programming" is_first:true language:english site_type:blogs',
    ts = (Sys.time() - (30 * 24 * 60 * 60))
  ) %>% 
  wh_paginate(5) %>% 
  wh_collect()

You can check how many articles this returned with nrow(rstats); each article comes with a lot of variables:

names(rstats)
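
To get a feel for the data, here is a minimal sketch that peeks at a few columns; published is used further down in this vignette, while title and text are assumed column names, so check the output of names(rstats) for the exact ones.

rstats %>% 
  # title and text are assumed column names; published is used below
  select(published, title, text) %>% 
  glimpse()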

Below we use g2r for a quick plot of the number of articles published by day.

library(dplyr)
# remotes::install_github("JohnCoene/g2r")
library(g2r)

rstats %>% 
  mutate(
    date = as.Date(published, "%Y-%m-%d")
  ) %>% 
  count(date) %>% 
  g2(asp(date, n, color = "#247BA0")) %>% 
  fig_line()

webhose.io does some entity extraction for us, so we do not have to do it ourselves: the data.frame includes the entities mentioned in the body of the text, along with the sentiment associated with each of them:

  1. Locations
  2. Persons
  3. Organisations

How accurate this is, however, depends on the articles you fetch.
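
Since we collected without flattening, the entities live in nested columns. Below is a hedged sketch that tallies sentiment for the extracted organisations; the column name entities.organizations and its sentiment field are assumptions about the nested structure, so inspect names(rstats) to find the actual layout.

library(tidyr)

rstats %>% 
  # `entities.organizations` is an assumed (hypothetical) column name
  unnest(cols = `entities.organizations`) %>% 
  count(sentiment, sort = TRUE)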

webhose.io also lets you fetch broadcast transcripts with wh_broadcasts; happy text mining!
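
A minimal sketch, assuming wh_broadcasts shares the q and ts interface of wh_news (only the function name is confirmed above):

wh_broadcasts(
    q = '"R programming" language:english',
    ts = (Sys.time() - (30 * 24 * 60 * 60))
  ) %>% 
  wh_collect()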


