View source: R/Collect.listing.reddit.R
Collect.listing.reddit    R Documentation
Description

Collects thread listings for one or more specified subreddits and structures the data into a dataframe.
Usage

## S3 method for class 'listing.reddit'
Collect(
  credential,
  endpoint,
  subreddits,
  sort = "hot",
  period = "all",
  max = 25,
  waitTime = c(6, 8),
  ua = getOption("HTTPUserAgent"),
  writeToFile = FALSE,
  verbose = FALSE,
  ...
)

collect_reddit_listings(
  subreddits,
  sort = "new",
  period = NULL,
  max = 25,
  waitTime = c(6, 8),
  ua = vsml_ua(),
  writeToFile = FALSE,
  verbose = FALSE,
  ...
)
Arguments

credential
    A credential object generated from Authenticate with class name "reddit".

endpoint
    API endpoint. Set to "listing" for this method, as shown in the example below.

subreddits
    Character vector. Subreddit names to collect thread listings from.

sort
    Character vector. Listing thread sort order. Options are "hot", "top", "new" and "rising". Default is "hot".

period
    Character vector. Listing top threads by time period. Only applicable to sort order "top". Options are "hour", "day", "week", "month", "year" and "all". Default is "all".

max
    Numeric vector. Maximum number of threads in listing to return. Default is 25.

waitTime
    Numeric vector. Time range in seconds from which a random wait is drawn between url collection requests. Minimum is 3 seconds. Default is c(6, 8) seconds.

ua
    Character string. Override the User-Agent string used in Reddit thread requests. Default is getOption("HTTPUserAgent").

writeToFile
    Logical. Write collected data to file. Default is FALSE.

verbose
    Logical. Output additional information about the data collection. Default is FALSE.

...
    Additional parameters passed to function. Not used in this method.
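To make the waitTime behaviour concrete, the sketch below draws a random pause from the default c(6, 8) range while respecting the documented 3-second minimum. This is an illustration of what the argument controls, not the package's actual internals.

```r
# Illustrative only: draw a random wait (in seconds) from the waitTime
# range, clamped to the documented 3-second minimum.
wait_range <- c(6, 8)  # the default waitTime value
wait_secs <- runif(1, min = max(wait_range[1], 3), max = wait_range[2])
# Sys.sleep(wait_secs) would then pause between successive listing requests
```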
Value

A tibble object with class names "listing" and "reddit".

Note

The reddit endpoint used for collection has a maximum limit of 25 threads per listing request.
Examples

## Not run:
# subreddit name to collect threads from
subreddits <- c("datascience")

redditListing <- redditAuth |>
  Collect(endpoint = "listing", subreddits = subreddits, sort = "new", writeToFile = TRUE)
## End(Not run)
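The direct-function interface can also be called without piping a credential object. A minimal sketch, assuming the functions above are exported by the vosonSML package and that network access and an appropriate User-Agent are available:

```r
## Not run:
# Assumed: these functions are exported by the vosonSML package
library(vosonSML)

threads <- collect_reddit_listings(
  subreddits = c("datascience"),
  sort = "new",
  max = 25,
  verbose = TRUE
)
## End(Not run)
```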