Description
Returns public statuses via one of the following four methods:

1. Sampling a small random sample of all publicly available tweets
2. Filtering via a search-like query (up to 400 keywords)
3. Tracking via a vector of user IDs (up to 5,000 user_ids)
4. Location via geo coordinates (1-360 degree location boxes)

stream_tweets2() additionally streams with a hardwired reconnection method to ensure timeout integrity.
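A minimal sketch of the four query styles (illustrative values only; the screen names are placeholders, and the bounding box is the one reused in the Examples below):

## 1. random sample of all publicly available tweets (default, blank query)
stream_tweets(q = "", timeout = 30)

## 2. filter via a search-like query of comma-separated keywords
stream_tweets(q = "election,debate", timeout = 30)

## 3. track users via comma-separated user IDs or screen names
stream_tweets(q = "hadleywickham,jack", timeout = 30)

## 4. location via a length-4 bounding box (sw lng, sw lat, ne lng, ne lat)
stream_tweets(q = c(-125, 26, -65, 49), timeout = 30)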
Usage

stream_tweets(
  q = "",
  timeout = 30,
  parse = TRUE,
  token = NULL,
  file_name = NULL,
  verbose = TRUE,
  ...
)

stream_tweets2(..., dir = NULL, append = FALSE)
Arguments

q
    Query used to select and customize streaming collection method.
    There are four possible methods. (1) The default, q = "", returns
    a small random sample of all publicly available tweets. (2) To
    filter by keyword, provide a comma-separated character string with
    the desired phrase(s) and keyword(s). (3) To track users, provide
    a comma-separated list of user IDs or screen names. (4) To stream
    by location, provide a vector of length 4 with the bounding box
    coordinates, e.g., c(-125, 26, -65, 49).

timeout
    Numeric scalar specifying amount of time, in seconds, to leave the
    connection open while streaming/capturing tweets. By default, this
    is set to 30 seconds. To stream indefinitely, use timeout = Inf.

parse
    Logical, indicating whether to return parsed data. By default,
    parse = TRUE. For larger streams, or for automated scripts designed
    to continuously collect data, set this to FALSE and parse the saved
    JSON file later with parse_stream().

token
    A Twitter OAuth token.

file_name
    Character with name of file. By default, a temporary file is
    created, tweets are parsed and returned to the parent environment,
    and the temporary file is deleted.

verbose
    Logical, indicating whether or not to include output
    processing/retrieval messages.

...
    Insert magical parameters, spell, or potion here. Or filter for
    tweets by language, e.g., language = "en".

dir
    Name of directory in which json files should be written. The
    default, NULL, creates a timestamped "stream" folder in the
    current working directory. If a dir name is provided that does not
    already exist, one is created.

append
    Logical indicating whether to append to or overwrite file_name if
    the file already exists. Defaults to FALSE, meaning this function
    overwrites any preexisting file with the same name as file_name.
    If TRUE, data are appended as new lines to the pre-existing file.
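The dir and append arguments belong to stream_tweets2(). A minimal sketch of a long-running, auto-reconnecting collection (the query and directory names here are hypothetical):

## stream rstats tweets for an hour, writing json files to a new
## directory and reconnecting whenever the connection drops
stream_tweets2(
  q = "rstats",
  timeout = 60 * 60,
  dir = "stream-rstats",
  append = TRUE
)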
Value

Tweets data returned as a data frame, with users data as an attribute. stream_tweets2() returns data in the same format as the original search_tweets() function.
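For example, the users data attached as an attribute can be pulled out with users_data():

## tweets data frame with users data stored as an attribute
e <- stream_tweets("election", timeout = 30)
users_data(e)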
See Also

https://developer.twitter.com/en/docs/tweets/sample-realtime/api-reference/decahose

Other stream tweets: parse_stream()
Examples

## Not run:
## stream tweets mentioning "election" for 90 seconds
e <- stream_tweets("election", timeout = 90)
## data frame where each observation (row) is a different tweet
e
## plot tweet frequency
ts_plot(e, "secs")
## stream tweets mentioning realdonaldtrump for 30 seconds
djt <- stream_tweets("realdonaldtrump", timeout = 30)
## preview tweets data
djt
## get user IDs of people who mentioned trump
usrs <- users_data(djt)
## lookup users data
usrdat <- lookup_users(unique(usrs$user_id))
## preview users data
usrdat
## store large amounts of tweets in files using continuous streams
## by default (i.e., when the query field is left blank),
## stream_tweets() returns a random sample of all tweets
stream_tweets(
timeout = (60 * 10),
parse = FALSE,
file_name = "tweets1"
)
stream_tweets(
timeout = (60 * 10),
parse = FALSE,
file_name = "tweets2"
)
## parse tweets at a later time using parse_stream function
tw1 <- parse_stream("tweets1.json")
tw1
tw2 <- parse_stream("tweets2.json")
tw2
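## sketch: combine the two parsed streams into one data frame;
## rtweet's do_call_rbind() preserves the users data attribute
## that a plain rbind() would drop
tw <- do_call_rbind(list(tw1, tw2))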
## streaming tweets by specifying lat/long coordinates
## stream continental US tweets for 5 minutes
usa <- stream_tweets(
c(-125, 26, -65, 49),
timeout = 300
)
## use lookup_coords() for a shortcut version of the above code
usa <- stream_tweets(
lookup_coords("usa"),
timeout = 300
)
## stream world tweets for 5 mins, save to JSON file
## note: lookup_coords("world") is a shortcut for these coords
world.old <- stream_tweets(
c(-180, -90, 180, 90),
timeout = (60 * 5),
parse = FALSE,
file_name = "world-tweets.json"
)
## read in JSON file
rtworld <- parse_stream("world-tweets.json")
## world data set with lat lng coords variables
x <- lat_lng(rtworld)
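## sketch: plot the geocoded tweets, assuming the maps package is
## installed (lat_lng() adds lat and lng columns to the data)
library(maps)
map("world")
points(x$lng, x$lat, pch = 20, col = "red")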
## End(Not run)