knitr::opts_chunk$set(echo = TRUE)
library(knitr)
opts_chunk$set(tidy.opts = list(width.cutoff = 50), tidy = TRUE)
This appendix describes how to extract data on storm disturbances by county across time. These storm occurrences can be accessed in the BBS.occurrences library using data('Storm.data').
Download the NOAA storm dataset and decompress it into a .csv file.
if(!file.exists("StormData.csv")) {
  url <- "https://d396qusza40orc.cloudfront.net/repdata%2Fdata%2FStormData.csv.bz2"
  download.file(url = url, destfile = "StormData.csv.bz2")
  R.utils::bunzip2(filename = "StormData.csv.bz2", destname = "StormData.csv", remove = TRUE)
}
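Before reading the file in, a quick sanity check (optional, and only a minimal sketch) confirms that the download and decompression succeeded; file.size() reports bytes, so we convert to MB:
file.exists("StormData.csv")
file.size("StormData.csv") / 1e6   # approximate size in MB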
Read in the downloaded .csv file.
Storm.data.raw <- data.table::fread('StormData.csv')
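A quick look at the raw import shows the scale of the file and the raw event labels that will be categorised in the next step. EVTYPE is the event-type column as documented for the NOAA file; if your copy differs, check names(Storm.data.raw):
dim(Storm.data.raw)
head(unique(Storm.data.raw$EVTYPE))   # raw event labels to be grouped into categories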
Because the dataset has over 900,000 rows in its original form, we process the data by categorising storm events into recognised categories, cleaning state and county names, and then returning data only on the most damaging storms (controlled by the argument damage.threshold). Note that this function only returns data on weather events that are likely to cause widespread disturbances to natural communities (including thunderstorms, heavy winds, hail, tornadoes, heavy rain, fire, drought, and tropical storms). Here, we return the top 33% of the most damaging storms that fall into these categories.
library(BBS.occurrences)
Storm.data <- cleanStormData(Stormdata.raw = Storm.data.raw, damage.threshold = .33)
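The damage.threshold argument is on a 0-1 scale, so .33 keeps roughly the top third of qualifying storms by damage. The toy sketch below (hypothetical damage values, not the package's internal code) illustrates the idea, assuming the threshold behaves like a quantile cutoff:
damage <- c(5, 120, 3000, 45, 980, 10, 15000, 250)   # hypothetical damage values
cutoff <- quantile(damage, probs = 1 - 0.33)          # 67th percentile of damage
damage[damage >= cutoff]                              # the most damaging ~33% of events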
Save the processed storm data in the Analysis_data folder. Note that these storm occurrences can be accessed in the BBS.occurrences library using data('Storm.data').
save(Storm.data, file = './Analysis_data/Storm.data.Rdata')
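With the package installed, later analyses can skip the steps above and load the processed object directly, as noted above:
library(BBS.occurrences)
data('Storm.data')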