```r
.tutorial <- TRUE
.solutions <- TRUE

begin_sol <- function(sol = .solutions) {
  if (!sol) {
    "<!--\n"
  } else {
    "\n<hr />**Solution:**\n"
  }
}
end_sol <- function(sol = .solutions) {
  if (!sol) {
    "-->\n"
  } else {
    "\n<hr />\n"
  }
}

library(StatCompLab)

if (.tutorial) {
  library(learnr)
  require(gradethis)
  learnr::tutorial_options(
    exercise.timelimit = 120,
    exercise.checker = grade_learnr,
    exercise.startover = TRUE,
    exercise.diagnostics = TRUE,
    exercise.completion = TRUE
    # exercise.error.check.code = exercise.error.check.code.
  )
}

library(ggplot2)
knitr::opts_chunk$set(
  echo = FALSE,
  # dev = "png",
  # dev.args = list(type = "cairo-png"),
  fig.width = 6
)

suppressPackageStartupMessages(library(tidyverse))
theme_set(theme_bw())
set.seed(1234L)
```
In this lab session you will explore

- Cross validation

Open your GitHub repository clone project from Lab 2 or 4. You can work in a plain `.R`
file, but working with code chunks in a `.Rmd` file is recommended. Make sure you have
updated the `StatCompLab` package to version 21.7.0 or higher.

r begin_sol(!.solutions)
The accompanying `Tutorial07Solutions` tutorial/vignette documents contain the
solutions explicitly, to make it easier to review the material after the workshops.
r end_sol(!.solutions)
Throughout this tutorial, you'll make use of the data wrangling and plotting
tools in the `dplyr`, `magrittr`, `ggplot2`, and other tidyverse packages,
so start your code with `library(tidyverse)` and `library(StatCompLab)`.
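For example, the top of your first code chunk (or `.R` file) might simply be:

```r
# Load the packages used throughout the tutorial
library(tidyverse)
library(StatCompLab)
```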
Note that the exercises in this tutorial build on each other, often with one solution being similar to previous solutions, with just a few additions and/or modifications.
Some of the data wrangling functions used in the tutorial are

- `mutate` to add or alter variables
- `filter` to keep only rows fulfilling given criteria
- `group_by` to perform analyses within subgroups defined by one or more variables
- `summarise` to compute data summaries, usually with subgroups
- `select` to keep only some variables
- `left_join` to join two data frames together, based on common identifier variables
- `pivot_wider` to split a value variable into new named variables based on a category variable
- `pivot_longer` to gather several variables into a new single value variable, with the names stored in a new category variable
- `%>%`, the "pipe" operator used to glue a sequence of operations together by feeding the result of an operation into the first argument of the following function call
- `head` to view only the first few rows of a data frame. This can be useful when debugging a sequence of "piped" expressions, by placing it last in the pipe sequence
- `pull` to extract the contents of a single variable

Functions for plotting:

- `ggplot` for initialising plots
- `geom_point` for drawing points
- `facet_wrap` for splitting a plot into "facet" plots, based on one or more category variables

The Global Historical Climatology Network at
https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily
provides historical weather data collected from all over the globe. A subset of the
daily resolution data set is available in the `StatCompLab` package,
containing data from eight weather stations in Scotland, covering the time period from 1 January 1960 to 31 December 2018. Some of the measurements are missing, either due to instrument problems or data collection issues. Load the data with
```r
data(ghcnd_stations, package = "StatCompLab")
data(ghcnd_values, package = "StatCompLab")
```
The `ghcnd_stations` data frame has 5 variables:

- `ID`: The identifier code for each station
- `Name`: The humanly readable station name
- `Latitude`: The latitude of the station location, in degrees
- `Longitude`: The longitude of the station location, in degrees
- `Elevation`: The station elevation, in metres above sea level

The station data set is small enough that you can view the whole thing,
e.g. with `knitr::kable(ghcnd_stations)`.
You can try to find some of the locations on a map (Google Maps and other online map
systems can usually interpret latitude and longitude searches).
r begin_sol()
```r
knitr::kable(ghcnd_stations)
```
r end_sol()
The `ghcnd_values` data frame has 7 variables:

- `ID`: The station identifier code for each observation
- `Year`: The year the value was measured
- `Month`: The month the value was measured
- `Day`: The day of the month the value was measured
- `DecYear`: "Decimal year", the measurement date converted to a fractional value, where whole numbers correspond to 1 January, and fractional values correspond to later dates within the year. This is useful for both plotting and modelling.
- `Element`: One of "TMIN" (minimum temperature), "TMAX" (maximum temperature), or "PRCP" (precipitation), indicating what each value in the `Value` variable represents
- `Value`: Daily measured temperature (in degrees Celsius) or precipitation (in mm)

The values data object has `r nrow(ghcnd_values)` rows, so we don't want to try
to view the whole object directly. Instead, we can start by summarising it.
Start by counting how many observations each station has, for each type of measurement.
The shortest approach is to use the `count()` function. A more generalisable
approach is to use the `group_by()`, `summarise()`, and `n()` functions. See the description
in `?count` for how `count()` is connected to the others.

To avoid having to create temporary named variables, end the pipe operations with
a call to `knitr::kable()`, especially if you're working in an RMarkdown document.
```r
# ghcnd_values %>%
#   count(???) %>%
#   knitr::kable()
#
# ghcnd_values %>%
#   group_by(???) %>%
#   ???
```
```r
ghcnd_values %>%
  count(ID, Element) %>%
  knitr::kable()
```
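For comparison, one possible version of the more generalisable `group_by()`/`summarise()`
approach mentioned above (equivalent to the `count()` solution) is sketched below:

```r
# Equivalent to count(ID, Element): count the rows within each (ID, Element) group
ghcnd_values %>%
  group_by(ID, Element) %>%
  summarise(n = n(), .groups = "drop") %>%
  knitr::kable()
```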
r begin_sol()
r end_sol()
Before, we only looked at the station data and weather measurements separately. When plotting, we would at least like to have access to the station names instead of the identifying codes, to give a more humanly readable presentation.
This can be accomplished with the `left_join()` function, which can add copies of
the rows from one data frame to another, where one or more columns match.

Create a new variable, `ghcnd`, that for each observation contains both the
measurements and the station data:
```r
ghcnd <- left_join(ghcnd_values, ghcnd_stations, by = "ID")
head(ghcnd)
```
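If you want to see the matching behaviour in isolation, here is a small made-up
illustration; the `values` and `info` data frames below are purely hypothetical:

```r
# Toy illustration of left_join() (dplyr, loaded via tidyverse): each row of
# 'values' gains the matching columns from 'info'; keys with no match in 'info'
# would get NA in the added columns
values <- data.frame(ID = c("A", "A", "B"), Value = c(1, 2, 3))
info <- data.frame(ID = c("A", "B"), Name = c("Station A", "Station B"))
left_join(values, info, by = "ID")
```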
Now plot daily minimum and maximum temperature measurements connected by lines
as a function of time (`DecYear`), with a different colour for each element, and a
separate subplot for each station (`facet_wrap(~variablename)`),
labelled by the station names.

Again, avoid creating a temporary named variable. Instead, feed the initial data
wrangling result into `ggplot()` directly.
```r
# OK to have aes() first in the ggplot() call, since the data part is provided via %>%
ghcnd %>%
  filter(???) %>%
  ggplot(aes(???)) +
  geom_???() +
  facet_wrap(???)
```
```r
ghcnd %>%
  filter(Element %in% ???) %>%
  ggplot(aes(DecYear, ???, colour = Element)) +
  geom_???() +
  facet_wrap(???)
```
ghcnd %>% filter(Element %in% c("TMIN", "TMAX")) %>% ggplot(aes(DecYear, Value, colour = Element)) + geom_line() + facet_wrap(~ Name)
r begin_sol()
r end_sol()
Due to the amount of data, it's difficult to see clear patterns here. Produce two figures, one showing the yearly averages of TMIN and TMAX as points, and one showing the monthly seasonal averages (for months 1 through 12) of TMIN and TMAX, separately for each station.
Again, avoid creating a temporary named variable. In the previous code, insert
calls to `group_by()` and `summarise()`, and modify the x-values in the aesthetics.
```r
ghcnd %>%
  filter(???) %>%
  group_by(???) %>%
  summarise(???, .groups = "drop") %>%
  ggplot(aes(???)) +
  geom_???() +
  facet_wrap(???)
```
```r
# group by both ID and Name; they identify the same sets, and we want to keep both
ghcnd %>%
  filter(Element %in% ???) %>%
  group_by(ID, Name, Element, Year) %>%
  summarise(Value = mean(Value), .groups = "drop") %>%
  ggplot(aes(Year, ???)) +
  geom_???() +
  facet_wrap(???)
```
ghcnd %>% filter(Element %in% c("TMIN", "TMAX")) %>% group_by(ID, Name, Element, Year) %>% summarise(Value = mean(Value), .groups = "drop") %>% ggplot(aes(Year, Value, colour = Element)) + geom_point() + facet_wrap(~ Name)
```r
ghcnd %>%
  filter(???) %>%
  group_by(???) %>%
  summarise(???, .groups = "drop") %>%
  ggplot(aes(???)) +
  geom_???() +
  facet_wrap(???)
```
```r
# group by both ID and Name; they identify the same sets, and we want to keep both
ghcnd %>%
  filter(Element %in% ???) %>%
  group_by(ID, Name, Element, Month) %>%
  summarise(Value = mean(Value), .groups = "drop") %>%
  ggplot(aes(Month, ???)) +
  geom_???() +
  facet_wrap(???)
```
ghcnd %>% filter(Element %in% c("TMIN", "TMAX")) %>% group_by(ID, Name, Element, Month) %>% summarise(Value = mean(Value), .groups = "drop") %>% ggplot(aes(Month, Value, colour = Element)) + geom_point() + facet_wrap(~ Name)
What are the common patterns in the yearly values, and in the monthly seasonal values?
r begin_sol()
The yearly averages all seem to have a slight increasing trend:
The monthly seasonal averages clearly show the seasonal pattern. The shapes are similar for all stations, but the average and amplitude vary a bit:
r end_sol()
If we want to do a scatter plot of TMIN and TMAX, we need to rearrange the data a bit.
For this we can use the `pivot_wider` function, which turns a names variable and
a values variable into several new named variables. Note that if only some measurement
elements are present on a given day, `NA`s will be produced by default. Optionally, filter
these rows out before calling `ggplot()`.
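Here is a small made-up illustration of this `NA` behaviour; the `toy` data frame below
is purely hypothetical:

```r
# Toy example: Day 2 has no TMAX measurement, so pivot_wider() fills in NA
toy <- data.frame(
  Day = c(1, 1, 2),
  Element = c("TMIN", "TMAX", "TMIN"),
  Value = c(2.1, 7.5, 3.0)
)
toy %>%
  pivot_wider(names_from = Element, values_from = Value)
```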
Draw a scatterplot for daily TMIN vs TMAX for each station, with colour determined by the month.
```r
ghcnd %>%
  pivot_wider(???) %>%
  filter(!is.na(???) & ???) %>%
  ggplot(aes(TMIN, ???)) +
  geom_???() +
  facet_wrap(???)
```
```r
ghcnd %>%
  pivot_wider(names_from = ???, values_from = ???) %>%
  filter(!is.na(TMIN) & !is.na(TMAX)) %>%
  ggplot(aes(TMIN, TMAX, ???)) +
  geom_???() +
  facet_wrap(???)
```
```r
ghcnd %>%
  pivot_wider(names_from = Element, values_from = Value) %>%
  filter(!is.na(TMIN) & !is.na(TMAX)) %>%
  ggplot(aes(TMIN, TMAX, colour = factor(Month))) +
  geom_point() +
  facet_wrap(~ Name)
```
r begin_sol()
r end_sol()
Choose one of the stations, and create a new data variable `data` from `ghcnd`
with the yearly averages of TMIN as a column (as in the previous `pivot_wider` output),
with missing values removed with `filter()`.
```r
data <- ghcnd %>%
  filter(???) %>%
  pivot_wider(???) %>%
  filter(???) %>%
  group_by(???) %>%
  summarise(???)
```
data <- ghcnd %>% filter(ID == "UKE00105875") %>% pivot_wider(names_from = Element, values_from = Value) %>% filter(???) %>% group_by(ID, Name, Year) %>% summarise(???)
data <- ghcnd %>% filter(ID == "UKE00105875") %>% pivot_wider(names_from = Element, values_from = Value) %>% filter(!is.na(TMIN)) %>% group_by(ID, Name, Year) %>% summarise(TMIN = mean(TMIN), .groups = "drop")
r begin_sol()
r end_sol()
Now, using the whole of `data`, estimate a linear model for TMIN with `lm()` formula
`TMIN ~ 1 + Year`, and compute the average 80% interval score (use the `proper_score()`
function that you used in Lab 6) for prediction intervals for each of the TMIN
observations in `data`.

See `?predict.lm` for documentation of the `predict()` method for models estimated
with `lm()`.
```r
fit <- lm(???)
pred <- predict(???)
score0 <- proper_score(???)
```
```r
fit <- lm(TMIN ~ 1 + Year, data = data)
pred <- predict(???)
score0 <- proper_score(???)
```
```r
fit <- lm(TMIN ~ 1 + Year, data = data)
pred <- predict(fit, newdata = data, interval = "prediction", level = 0.8)
score0 <- proper_score("interval", data$TMIN, ???)
```
```r
fit0 <- lm(TMIN ~ 1 + Year, data = data)
pred0 <- predict(fit0, newdata = data, interval = "prediction", level = 0.8)
score0 <- mean(proper_score(
  "interval", data$TMIN,
  lwr = pred0[, "lwr"], upr = pred0[, "upr"], alpha = 0.8))
```
r begin_sol()
r end_sol()
We now want to compute the 5 average 80% interval scores from 5-fold cross validation,
based on a random partition of the data into 5 approximately equal parts.

First add a new column `Group` to `data` defining the partitioning, using `mutate()`.
One approach is to compute a random permutation index vector,
and then use the modulus operator `%%` to reduce it to 5 values, or `ceiling()`
on scaled indices.
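As a quick illustration of the modulus idea (applied to the indices 1 to 10 rather
than the real data), the sketch below maps a sequence of indices to five group labels:

```r
# Maps the indices 1..10 to the five group labels 2 3 4 5 1 2 3 4 5 1
(seq_len(10) %% 5) + 1
```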
```r
data <- data %>%
  mutate(Group = ???, # Make a permuted index variable
         Group = ???) # Convert the indices to 5 group index values
```
```r
data <- data %>%
  mutate(Group = sample(???, size = ???, replace = ???),
         Group = ???)
```
```r
data <- data %>%
  mutate(Group = sample(seq_len(nrow(data)), size = nrow(data), replace = FALSE),
         Group = ???)
```
```r
data <- data %>%
  mutate(Group = sample(seq_len(nrow(data)), size = nrow(data), replace = FALSE),
         Group = (Group %% 5) + 1)

# Alternative:
# data <- data %>%
#   mutate(Group = sample(seq_len(nrow(data)), size = nrow(data), replace = FALSE),
#          Group = ceiling(Group / nrow(data) * 5))
```
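As an optional sanity check (not part of the original instructions), you can verify
that the five groups have roughly equal sizes:

```r
# The five group labels should occur approximately equally often
table(data$Group)
```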
r begin_sol()
r end_sol()
Then loop over the partition groups, estimating the model leaving the group out, and then predicting and scoring predictions for the group.
```r
scores <- numeric(5)
for (grp in seq_len(5)) {
  fit <- lm(TMIN ~ 1 + Year, data = data %>% ???)
  pred <- predict(fit, newdata = ???,
                  interval = "prediction", level = 0.8)
  scores[grp] <- mean(proper_score(
    "interval", ???,
    lwr = pred[, "lwr"], upr = pred[, "upr"], alpha = 0.8))
}
```
```r
scores <- numeric(5)
for (grp in seq_len(5)) {
  fit <- lm(TMIN ~ 1 + Year, data = data %>% filter(Group != grp))
  pred <- predict(fit, newdata = ???,
                  interval = "prediction", level = 0.8)
  scores[grp] <- mean(proper_score(
    "interval", ???,
    lwr = pred[, "lwr"], upr = pred[, "upr"], alpha = 0.8))
}
```
scores <- numeric(5) for (grp in seq_len(5)) { fit <- lm(TMIN ~ 1 + Year, data = data %>% filter(Group != grp)) pred <- predict(fit, newdata = data %>% filter(Group == grp), interval = "prediction", level = 0.8) scores[grp] <- mean(proper_score( "interval", ???, lwr = pred[, "lwr"], upr = pred[,"upr"], alpha = 0.8)) }
scores <- numeric(5) for (grp in seq_len(5)) { fit <- lm(TMIN ~ 1 + Year, data = data %>% filter(Group != grp)) pred <- predict(fit, newdata = data %>% filter(Group == grp), interval = "prediction", level = 0.8) scores[grp] <- mean(proper_score( "interval", (data %>% filter(Group == grp)) %>% pull("TMIN"), lwr = pred[, "lwr"], upr = pred[,"upr"], alpha = 0.8)) }
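The overall cross validation estimate can then be obtained by averaging the five
group scores, for example:

```r
# Average the five group scores to get the overall cross validation score
mean(scores)
```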
Compare the resulting scores with the one based on the whole data set. Is the average cross validation score larger or smaller?
r begin_sol()
The average cross validation score for this problem is usually larger than the one for the whole data set; cross validation is intended to reduce the risk of underestimating the prediction error, so this is what we would expect. Repeat the random group allocation to see how much it influences the cross validation score average.
knitr::kable(data.frame("Whole data score" = score0, "Cross validation score" = mean(scores)))
r end_sol()