Vector data sources represent geographic features as geometric shapes. Different kinds of shapes are used to represent different kinds of features:

- Points represent discrete locations, such as addresses, wells, or fire stations.
- Lines represent linear features, such as roads, railroads, and rivers.
- Polygons represent areas, such as county boundaries, lakes, and zip code regions.
In addition to geometry, shapes usually have a variety of associated data called attributes. For example, a point representing a United States address would probably have street, city, state, and zip code attributes in addition to its geographic coordinates.
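To make this concrete, here is a minimal sketch (using the `sf` package introduced later in this lesson; the coordinates and attribute values are invented) of a single address point with its attributes:

```r
library(sf)

# One point feature: geographic coordinates plus attribute columns.
addr <- st_sf(
  street = "123 Main St", city = "Raleigh", state = "NC", zip = "27601",
  geometry = st_sfc(st_point(c(-78.64, 35.78)), crs = 4326)  # lon/lat, WGS84
)
addr
```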
Probably the most common vector format is text-based XYZ point coordinates. In this format each line corresponds to one point, usually with each coordinate separated by a comma. This format is called comma-separated values (CSV) or, more generally, delimited text.
```
x1,y1,z1
x2,y2,z2
x3,y3,z3
```
The Z coordinate is sometimes omitted, for example when points are identified only by longitude and latitude. Very often the Z coordinate is re-purposed to record information about the point, such as a measurement that occurred there.
An advantage of this format is that it is human readable and highly portable: virtually all computers and software can read and write it, and it is easily manipulated using UNIX shell utilities. A disadvantage is that it is inefficient for large data sets and can only represent point geometry. Another disadvantage is its very limited support for metadata, which is typically restricted to column names.
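Reading such a file into R is straightforward. In the sketch below (the file name `points.csv` and its columns `lon`, `lat`, and `elev` are hypothetical), the points are read with base R and converted to a spatial point layer with the `sf` package introduced below:

```r
library(sf)

pts <- read.csv("points.csv")                               # columns: lon, lat, elev
pts <- st_as_sf(pts, coords = c("lon", "lat"), crs = 4326)  # promote to point geometry
pts
```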
Another very common vector format is the ESRI Shapefile. This openly documented format originated in the 1990s with ArcView GIS. The term shapefile is actually a misnomer: a shapefile is a collection of several files sharing a base name:

- `.shp`: the feature geometries
- `.shx`: a positional index into the geometry file
- `.dbf`: the feature attributes, stored in dBase format
- `.prj`: the coordinate system and projection information
Shapefiles are capable of storing points, lines, and polygons, and up to 255 attributes per feature. Attribute names are limited to 10 characters. Shapefiles have good support for metadata.
Shapefiles are generally not human readable, but they are portable and well supported by GIS software, making them a good choice for storing vector data. One limitation is that each shapefile can only contain one kind of geometry: points can't be stored with lines, for example, so you need separate files for each geometry type. In addition, lines and polygons are internally stored as collections of points rather than mathematical curves (splines), which can result in large files.
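As a quick illustration (assuming the `ncshape` directory used throughout this lesson), we can list the component files of a single shapefile; they all share a base name:

```r
# All component files of the firestations shapefile share the same base name.
list.files("ncshape", pattern = "^firestations\\.")
```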
Shapefiles are read in R with the `sf` package. We also load `rgdal`, which provides the `ogrInfo` routine used below, and the `tidyverse` packages for data manipulation and plotting.

```r
library(rgdal)
library(sf)
library(tidyverse)
```
The `sf` routines for opening and inspecting shapefiles don't directly accept a shapefile path argument. Instead they accept a data source name (`dsn`) and a layer name. The data source name is the directory containing the shapefile, and the layer name is the name of the shapefile without the file extension (`.shp`).
You can query a data source for a list of available layers with the `st_layers` function.

```r
st_layers(dsn = "ncshape")
layers <- st_layers(dsn = "ncshape")
```
To obtain detailed information about a layer, call `ogrInfo` from `rgdal` with the data source and layer name.

```r
ogrInfo(dsn = "ncshape", layer = "firestations")
firestations.info <- ogrInfo("ncshape", "firestations")
```
The `firestations` layer contains locations and attributes of fire stations in Wake County, NC. The "Feature type" line indicates that this layer has point geometry. The "Driver" line shows that the layer contains `r firestations.info$nrows` points, and the "Fields" table shows that each point has `r firestations.info$nitems` attributes.
Individual layers are read into R with `st_read`.

```r
firestations <- st_read(dsn = "ncshape", layer = "firestations")
```
The type of object returned by `st_read` depends on the layer geometry: it will contain `POINT`, `POLYGON`, `MULTIPOLYGON`, or other geometry types. Because the result is also a data frame, it can be manipulated with `dplyr`. For example, we can count the number of fire stations in each Wake County city.
```r
num_fs <- firestations %>%
  group_by(CITY) %>%
  summarize(count = n())
print(num_fs)
```
Notice that the resulting data frame still has the geometries. We can plot this with `ggplot`.

```r
ggplot(num_fs, aes(x = count, y = CITY)) + geom_point()
```
In this activity we will plot the number of elementary schools, middle schools, and high schools in Wake County cities. Open the `schools_wake` layer in the `ncshape` data source and build a table of school counts by city and grade level. Keep only those rows with a nonzero count and level "E", "M", or "H". Create dot plots and bar charts of the data, experimenting with conditioning and grouping. Which plots are most informative?
Functions: `sf::st_read`, `dplyr::count`, `dplyr::filter`, `ggplot2::ggplot`, `ggplot2::geom_point`, `ggplot2::facet_wrap`
```r
schools.wake <- st_read(dsn = "ncshape", layer = "schools_wake", quiet = TRUE)
schools.wake %>%
  count(ADDRCITY, GLEVEL) %>%
  filter(n > 0, GLEVEL %in% c("E", "M", "H")) %>%
  ggplot(aes(x = n, y = ADDRCITY)) +
  geom_point() +
  facet_wrap(~ GLEVEL) +
  xlab("Count") +
  ylab("City")
```
The spatial data frames returned by `st_read` can be plotted directly with `ggplot` using `geom_sf`. We can `subset` the polygons in the `boundary_county` layer to plot an outline of Wake County:

```r
boundary.county <- st_read("ncshape", "boundary_county", quiet = TRUE)
wake <- subset(boundary.county, NAME == "WAKE")
p1 <- ggplot(wake) + geom_sf() + theme_bw()
print(p1)
```
The `firestations` point layer can be added to the plot by simply appending a new layer and printing.

```r
p1 <- p1 + geom_sf(data = firestations)
print(p1)
```
Similarly, the `roadsmajor` line layer can be added by appending a new `ggplot` layer:

```r
roadsmajor <- st_read("ncshape", "roadsmajor", quiet = TRUE)
p1 <- p1 + geom_sf(data = roadsmajor)
print(p1)
```
Use the `nc_state`, `hospitals`, and `railroads` layers of the `ncshape` data source to create a map of North Carolina hospitals and railroads. Note that `geom_sf` is sometimes very slow (I have filed a bug report); it may take 5 minutes or more to actually draw the plot.
Functions: `st_read`, `ggplot`, `geom_sf`
```r
nc_state <- st_read(dsn = "ncshape", layer = "nc_state", quiet = TRUE)
railroads <- st_read(dsn = "ncshape", layer = "railroads", quiet = TRUE)
hospitals <- st_read(dsn = "ncshape", layer = "hospitals", quiet = TRUE)
ggplot(nc_state) +
  geom_sf() +
  geom_sf(data = railroads) +
  geom_sf(data = hospitals)
```
While some maps are designed only to display shape and location, others are designed to highlight a theme connected to a geographic area. Such maps are called thematic maps, and they can be plotted in R using `ggplot`. Here we color Wake County zip codes with a unique color for each city.

```r
zipcodes.wake <- st_read(dsn = "ncshape", layer = "zipcodes_wake", quiet = TRUE)
ggplot(zipcodes.wake) + geom_sf(aes(fill = NAME))
```
A map projection is a planar representation of Earth's curved surface. While it is usually sufficient to treat Earth's surface as flat over small geographic areas, over large areas the distortions introduced by flattening a curved surface become significant. Projections distort areas, shapes, distances, directions, and other properties of Earth's surface. A map projection represents a particular trade-off among these distortions. Some preserve area at the expense of shape (equal-area projections), while others preserve shape at the expense of area (conformal projections). It is impossible to preserve both.
For these reasons, there is no single best projection. A projection must be chosen to suit the particular application, so very often when combining geospatial data from different sources you will find that different projections were used. In this case it is necessary to transform one projection to another in a process called reprojection. Since vector data sources are based on mathematical objects, reprojection is straightforward.
We can obtain shapefile projection details with the `st_crs` function:

```r
countries <- st_read(dsn = "extra", layer = "ne_110m_admin_0_countries", quiet = TRUE)
st_crs(countries)
ggplot(countries) + geom_sf(aes(fill = subregion))
```
Reprojection is done with `st_transform`:

```r
countries <- st_transform(countries, st_crs("+proj=moll"))
ggplot(countries) + geom_sf(aes(fill = subregion))
```
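When combining layers from different sources, a common pattern (sketched here using the `hospitals` and `countries` objects loaded earlier, purely for illustration) is to check whether two layers share a coordinate reference system and, if not, reproject one into the CRS of the other:

```r
# Compare the two coordinate reference systems.
st_crs(hospitals) == st_crs(countries)

# Reproject the hospitals layer into the CRS of the countries layer.
hospitals.moll <- st_transform(hospitals, st_crs(countries))
```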
One of the greatest advantages of vector data sources is their suitability for geometric operations. Here we will highlight several useful operations provided by the `sf` package. These operate on and return `sf` or `sfc` objects. While they accept the spatial data frames returned from `st_read` as inputs, the attributes are discarded and only the geometries are returned.
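For example (a quick check using the `firestations` layer loaded earlier), applying `st_union` to a spatial data frame returns a bare geometry object:

```r
# The result is an sfc (geometry only); the attribute columns are gone.
class(st_union(firestations))
```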
`sf` offers `st_union` for merging intersecting geometries. The `boundary.county` layer contains `r nrow(boundary.county)` polygons representing portions of `r nlevels(boundary.county$NAME)` counties. We can use `st_union` in combination with `dplyr` (`filter`, `group_by`, `summarize`) to merge the polygons of individual counties, producing an `sf` object with one polygon per county.
```r
counties <- boundary.county %>%
  filter(!is.na(NAME)) %>%
  group_by(NAME) %>%
  summarize(geometry = st_union(geometry))
ggplot(counties) + geom_sf(aes(fill = NAME)) + guides(fill = FALSE)
```
In this activity we will group North Carolina counties longitudinally into four regions and merge each region into a single polygon. One approach is to use `st_centroid` and `st_coordinates` to obtain the county center coordinates. The vector of x coordinates can then be converted to a factor of region labels using `cut`. After the factor is added as a column of the `sf` object, `st_union` can merge the polygons of each region.

Functions: `st_centroid`, `st_coordinates`, `cut`, `group_by`, `summarize`, `st_union`
```r
x <- st_coordinates(st_centroid(counties))
regions <- cut(x[, 1], 4, labels = 1:4, include.lowest = TRUE)
counties$region <- regions
counties.region <- counties %>%
  group_by(region) %>%
  summarize(geometry = st_union(geometry))
ggplot(counties.region) + geom_sf(aes(fill = region))
```
The `st_centroid` routine calculates the centroid of the given geometry.
```r
county.centers <- st_centroid(counties)
ggplot(counties) +
  geom_sf(aes(fill = NAME)) +
  geom_sf(data = county.centers) +
  guides(fill = FALSE)
```
Often the space surrounding a feature is of as much interest as the feature itself, particularly when overlaying points and lines on another data set. We can use the `st_buffer` function to expand a geometry to include all of the area within a specified distance. `st_buffer` computes a buffer for each feature individually; to buffer an entire layer as a single geometry, combine the features first with `st_combine` or `st_union` (a combined buffer is sketched below). First, plot the hospitals on the state outline:

```r
ggplot(nc_state) + geom_sf() + geom_sf(data = hospitals, color = "steelblue")
```
Now use `st_buffer` to create a 10 mile buffer around each hospital:
```r
hospitals.buffer <- st_buffer(hospitals, 16000)  # 16 km is approx 10 mi
ggplot(nc_state) +
  geom_sf() +
  geom_sf(data = hospitals.buffer) +
  geom_sf(data = hospitals, color = "steelblue")
```
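To buffer the hospitals as one combined geometry instead (a hedged variant of the example above, not part of the original lesson), union the points first; the result is a single dissolved buffer rather than one buffer per hospital:

```r
# Union the hospital points into one geometry, then buffer the whole set.
hospitals.buffer.all <- st_buffer(st_union(hospitals), 16000)
ggplot(nc_state) + geom_sf() + geom_sf(data = hospitals.buffer.all)
```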
In this activity we will find the proportion of Wake County within 5 km of a fire station using `st_buffer`, `st_difference`, and `st_area`.

Functions: `st_buffer`, `st_combine`, `ggplot`, `st_area`, `st_difference`
```r
firestations.buffer <- st_buffer(st_combine(firestations), 5000)
ggplot(wake) +
  geom_sf() +
  geom_sf(data = firestations.buffer) +
  geom_sf(data = firestations)
uncovered.area <- st_area(st_difference(wake, firestations.buffer))
total.area <- st_area(wake)
(total.area - uncovered.area) / total.area
```
In `sf` it is easy to join tables spatially. The `st_join` function adds the attribute columns of one `sf` table to another based on spatial intersection, and `sf` also overrides the `[]` indexing operator so that indexing one `sf` table with another performs spatial subsetting. Here is an example of joining two tables: it adds a column from the `boundary.county` table to the `hospitals` table, which we can use to give a county name to each hospital. We extract only the `NAME` column from the `boundary.county` table, renaming it `county` so that it does not clash with the existing hospital attributes.

```r
hospitals.joined <- st_join(hospitals, boundary.county %>% select(county = NAME))
table(hospitals.joined$county)
```
Note the presence of zeros in the table: the `NAME` column of the county layer is a factor whose levels include every county in the layer, so counties without a hospital appear with a count of zero. We can use the table to create a thematic map.
Compute and plot the number of hospitals in each county in North Carolina. Hint: you can do an inner join between the `boundary.county` table and the `hospitals` table to get a separate row for each hospital-county combination.

Functions: `st_join`, `group_by`, `summarize`, `n`, `st_simplify`, `ggplot`, `geom_sf`
```r
# Inner spatial join: one row per county polygon/hospital pair.
n_hosp <- st_join(select(boundary.county, NAME), select(hospitals), left = FALSE) %>%
  group_by(NAME) %>%
  summarize(num_hospitals = n()) %>%
  st_simplify(TRUE, 1e3)
ggplot(n_hosp) + geom_sf(aes(fill = num_hospitals))
```
Spatial interpolation is the process of using the values of a continuous variable (e.g. temperature) at a set of sample points to estimate the value at every other point in a region of interest. The basic idea is to use a weighted average of the values at observed (sampled) points to estimate the values at unobserved points. All spatial interpolation methods are based on Tobler's First Law of Geography:
Everything is related to everything else, but near things are more related than distant things.
We will demonstrate spatial interpolation with the `gstat` package and the Meuse data set.

```r
library(sf)
library(sp)
library(stars)
library(gstat)
library(ggplot2)
data(meuse)
data(meuse.grid)
data(meuse.riv)
```

The first step is converting the data set to `sp` classes.

```r
coordinates(meuse) <- ~x+y
proj4string(meuse) <- CRS("+init=epsg:28992")
coordinates(meuse.grid) <- ~x+y
proj4string(meuse.grid) <- CRS("+init=epsg:28992")
meuse.river <- SpatialPolygons(list(Polygons(list(Polygon(meuse.riv)), "meuse.riv")))
proj4string(meuse.river) <- CRS("+init=epsg:28992")
```
Now we convert to `sf` objects.

```r
meuse <- st_as_sf(meuse)
meuse.grid <- st_as_sf(meuse.grid)
meuse.river <- st_as_sf(meuse.river)
```
We can visualize the data set as a whole.

```r
ggplot(meuse) +
  geom_sf(data = meuse.river) +
  geom_sf(aes(color = zinc, size = zinc))
```
There are many methods for spatially interpolating point data. A simple algorithm is inverse distance weighting, where each interpolated value is a weighted average of the values of neighboring points, with weights proportional to the inverse of the distance to all (or a subset of) the original points. To obtain an inverse distance interpolation, we use the `krige` function from the `gstat` package without specifying a fitted model.

```r
zinc.nvd <- krige(zinc ~ 1, locations = meuse, newdata = meuse.grid)
ggplot(zinc.nvd) + geom_sf(aes(color = var1.pred))
```
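Equivalently (an alternative sketch, not part of the original example), `gstat` also provides an `idw` function, whose `idp` argument sets the inverse distance power; larger powers weight nearby points more heavily:

```r
zinc.idw <- idw(zinc ~ 1, meuse, meuse.grid, idp = 2)  # idp = 2 is the default power
ggplot(zinc.idw) + geom_sf(aes(color = var1.pred))
```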
A variogram quantifies the spatial autocorrelation of a set of points. The x-axis of a variogram is the lag distance between points; the y-axis gives the average squared difference between the z-values of point pairs separated by that lag distance. When the variogram is flat, there is no autocorrelation; when it rises steeply, there is strong autocorrelation. In geostatistics, we often fit a function to the empirical variogram and then use that function to determine the contribution of neighbors to the interpolation. If the variogram flattens out at a very short distance, we increase the contribution of nearby neighbors and the interpolated surface will be rough; if it flattens out at a very long distance, we generate a smoother interpolation.

```r
zinc.ev <- variogram(log(zinc) ~ 1, meuse)
zinc.fitted <- fit.variogram(zinc.ev, model = vgm(1, "Sph", 900, 1))
plot(zinc.ev, zinc.fitted)
```
Ordinary kriging assumes that the data are stationary, i.e., there is no spatial trend in the mean z-value. This is indicated below by the model formula containing only an intercept term (`~1`). A universal kriging model would contain formula terms related to the coordinates, which allows one to account for spatial trends in the mean value.

```r
zinc.kriged <- krige(log(zinc) ~ 1, meuse, meuse.grid, model = zinc.fitted)
ggplot(zinc.kriged) + geom_sf(aes(color = var1.pred))
```
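For comparison (a hedged sketch, not part of the original lesson), universal kriging can be carried out by adding coordinate columns to both the sample points and the prediction grid and including them as trend terms in the model formula:

```r
# Add X/Y coordinate columns so they can appear in the model formula.
meuse$X <- st_coordinates(meuse)[, 1]
meuse$Y <- st_coordinates(meuse)[, 2]
meuse.grid$X <- st_coordinates(meuse.grid)[, 1]
meuse.grid$Y <- st_coordinates(meuse.grid)[, 2]

# Fit a variogram to the residuals from the coordinate trend, then krige.
zinc.uk.ev <- variogram(log(zinc) ~ X + Y, meuse)
zinc.uk.fitted <- fit.variogram(zinc.uk.ev, model = vgm(1, "Sph", 900, 1))
zinc.uk <- krige(log(zinc) ~ X + Y, meuse, meuse.grid, model = zinc.uk.fitted)
ggplot(zinc.uk) + geom_sf(aes(color = var1.pred))
```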
Interpolate the annual rainfall data in the `precip_30ynormals` layer.

Functions: `st_read`, `st_sample`, `krige`, `ggplot`, `variogram`, `fit.variogram`
```r
precip <- st_read(dsn = "ncshape", layer = "precip_30ynormals", quiet = TRUE)
pts <- st_sample(nc_state, 5000, "regular")
precip.nvd <- krige(annual ~ 1, precip, pts)
ggplot(precip.nvd) +
  geom_sf(aes(color = var1.pred)) +
  labs(title = "Inverse Distance Annual Precipitation Estimates")
precip.ev <- variogram(annual ~ 1, precip)
precip.fitted <- fit.variogram(precip.ev, model = vgm(1, "Sph", 50000, 1))
plot(precip.ev, precip.fitted)
precip.kriged <- krige(annual ~ 1, precip, pts, model = precip.fitted)
ggplot(precip.kriged) +
  geom_sf(aes(color = var1.pred)) +
  labs(title = "Kriged Annual Precipitation Estimates")
```