Introduction

Growing Shade is a prioritization tool for tree canopy enhancement and preservation. It was conceptualized by the Metropolitan Council, Tree Trust, and The Nature Conservancy with a focus on the Twin Cities (Minnesota) region. All analyses and code were developed by Ellen Esch. The Twin Cities version of Growing Shade can be accessed at www.growingshade.com or https://metrotransitmn.shinyapps.io/growing-shade/.

The components of Growing Shade relevant to population demographics (i.e., determining environmental justice priority areas) can be scaled to any area within the United States. The components relevant to tree canopy have been parameterized for the Twin Cities region, but should generalize reasonably well to other temperate areas with deciduous and coniferous trees.

This tutorial walks through how to fetch and synthesize the various data pieces that go into Growing Shade. It should be useful when performing data updates, or when scaling this workflow to other regions.

The first time you run this code, it is helpful to walk step by step through the various pieces. There are several places where you will be instructed to request and save API keys, along with other manual steps that cannot be automated. After the initial walkthrough, however, future runs (if needed) and small data or geography adjustments can probably be completed in one go!
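For example, once the API keys and any region-specific files are in place, the whole document can be re-rendered with a single call (a sketch; the file name 01_data.Rmd is an assumption about how this script is saved):

# Sketch: re-render this whole document in one go.
# The path below is hypothetical; substitute the actual location of this .Rmd.
rmarkdown::render(here::here("data-raw", "01_data.Rmd"))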

knitr::opts_chunk$set(
  echo = FALSE, message = TRUE, 
  warning = TRUE,
  cache = FALSE, progress = FALSE
)
library(dplyr)
library(tidyr)
library(readr)
library(stringr)
library(tibble)
library(ggplot2)
library(tigris)
library(sf)
library(tidycensus)
library(ggbeeswarm)
library(RSocrata)
library(here)


# helper: erase the (unioned) geometry of y from x
st_erase <- function(x, y) st_difference(x, st_union(st_combine(y)))
# convenience negation of %in%
`%not_in%` <- Negate(`%in%`)

Set parameters

These global parameters dictate which state/county combinations will be used, as well as the year each data piece comes from. Since the data pieces are updated on different schedules, it is unfortunately not as simple as setting a "global" rule to use data corresponding to the current year.

This code is set up to use/process 2020-era census geographies at the block group level (even though some data sources are still using 2010-era geographies...but we'll deal with those later).
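As context for the GEOID handling used throughout this document, block group GEOIDs concatenate the state, county, tract, and block group codes, which is why tract identifiers can be recovered by truncating to the first 11 characters (this trick is used below to join tract-level data onto block groups):

# Block group GEOIDs are 12 characters:
# state (2) + county (3) + tract (6) + block group (1).
bg_example <- "271230301001" # a hypothetical Ramsey County, MN block group
substr(bg_example, start = 1, stop = 11) # the parent tract's GEOID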

# Geography variables
state <- c("MN")
county <- c("Anoka", "Carver", "Dakota", "Hennepin", "Ramsey", "Scott", "Washington") # either type in specific county names, or for all counties within a state assign `county <- NULL`
# state <- c("WI") # example: swap in a different state

# Demographic variables
acs_year <- 2021 # most recent ACS 5-year vintage; the 2017-2021 estimates were released December 8, 2022
census_year <- 2020

process_demographics <- TRUE
detailed_population_count <- TRUE # are there more accurate counts of total population and housing units available (for calculating density)? If so, bring those in
# Tree canopy and/or geography files
regional_specific_data <- TRUE # go to 02_geographies.Rmd and 03_remotesensing.Rmd scripts to process/make these files, as applicable

Process demographic data

The demographic information is fetched using APIs and app tokens. Follow these instructions:

You will need an API key from the Census Bureau. Request one at https://api.census.gov/data/key_signup.html and store it as the CENSUS_KEY environment variable.

You will need an app token from the CDC's Socrata-based data portal (https://chronicdata.cdc.gov). Store it as CDC_KEY, along with the portal account email (CDC_EMAIL) and password (CDC_PASSWORD).
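A minimal sketch of storing these credentials so the code below can find them. The variable names match the Sys.getenv() calls used throughout; usethis::edit_r_environ() opens the file for editing:

# Add these lines to ~/.Renviron (then restart R):
# CENSUS_KEY=your_census_api_key
# CDC_KEY=your_socrata_app_token
# CDC_EMAIL=you@example.org
# CDC_PASSWORD=your_socrata_password
Sys.getenv("CENSUS_KEY") != "" # quick check that the key is visible to R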

Decennial census

Race variables come from the 2020 decennial census. Currently, decennial census data is preferred over ACS data because it is a population count rather than a sample, and because ACS data averages over five years (so starting with the 2021-2025 ACS release, it may be logical to switch to ACS data instead).

if (process_demographics == TRUE) {
  if (detailed_population_count == TRUE) {
    # this more detailed population count data is not currently being incorporated, because better data doesn't exist at this point! The skeleton should be useful when better data does exist...
    temp <- tempfile()
    temp2 <- tempfile()
    download.file(
      "https://resources.gisdata.mn.gov/pub/gdrs/data/pub/us_mn_state_metc/society_small_area_estimates/xlsx_society_small_area_estimates.zip",
      destfile = temp
    )
    unzip(zipfile = temp, exdir = temp2)
    list.files(temp2)

    pop_count <- readxl::read_xlsx(
      file.path(temp2, "SmallAreaEstimatesBlockGroup.xlsx")) %>%
      rename(
        population_count = POP_EST,
        housing_units = HU_EST
      )
    files <- list.files(temp2, full.names = TRUE)
    file.remove(files)
  }


  census_api_key(Sys.getenv("CENSUS_KEY"))

  # v2020 <- load_variables(census_year, "pl", cache = TRUE)
  # View(v2020)# %>% filter(geography == "block group"))


  decennial_var <- c(
    "P2_005N", # white nh
    "P2_006N", # black nh
    "P2_007N", # amin nh
    "P2_008N", # asian nh
    "P2_009N", # hawaiian pi nh
    "P2_010N", # some other nh
    "P2_011N", # 2+ nh
    "P2_002N", # hispanic
    "H1_001N" # total housing units
  )

  decennial_data <- get_decennial(
    geography = "block group",
    variables = decennial_var,
    summary_var = "P1_001N",
    year = census_year,
    state = state,
    county = county,
    geometry = FALSE
  ) %>%
    pivot_wider(names_from = variable, values_from = value) %>%
    # process data
    mutate(across(starts_with("P2"), ~ .x / summary_value)) %>%
    rename(
      whitenh = P2_005N,
      pblacknh = P2_006N,
      pamindnh = P2_007N,
      phisppop = P2_002N,
      housing_units = H1_001N
    ) %>%
    mutate(
      pothmultnh = (P2_010N + P2_011N),
      pasiannh = (P2_008N + P2_009N),
      pbipoc = 1 - whitenh,
      population_count = summary_value
    ) %>%
    select(-c(starts_with("P2"), summary_value)) %>%
    # get a nice name for tracts
    separate(NAME, into = c("bg", "tract", "county", "state"), sep = ",") %>%
    mutate(fancyname = paste0(county, " tract ",
                              str_remove(tract, "Census Tract "),
                              ", block group ", str_remove(bg, "Block Group "))) %>%
    select(-bg, -tract, -county, -state)


  decennial_metadata <- get_decennial(
    geography = "county",
    variables = decennial_var,
    summary_var = "P1_001N",
    year = census_year,
    state = state,
    county = county,
    geometry = FALSE
  ) %>%
    add_column(geo = "region averages") %>%
    group_by(geo, variable) %>%
    summarise(
      value = sum(value),
      summary_value = sum(summary_value)
    ) %>%
    pivot_wider(names_from = variable, values_from = value) %>%
    mutate(across(starts_with("P2"), ~ .x / summary_value)) %>%
    rename(
      whitenh = P2_005N,
      pblacknh = P2_006N,
      pamindnh = P2_007N,
      phisppop = P2_002N,
      housing_units = H1_001N
    ) %>%
    mutate(
      pothmultnh = (P2_010N + P2_011N),
      pasiannh = (P2_008N + P2_009N),
      pbipoc = 1 - whitenh,
      population_count = summary_value
    ) %>%
    select(-c(starts_with("P2"), summary_value, geo))

  save(decennial_data, decennial_metadata, file = paste0(here::here(), "/data-raw/decennial_data.rda"))
} else {
  load(paste0(here::here(), "/data-raw/decennial_data.rda"))
}

American Community Survey

Other demographic variables come from the American Community Survey 5-year data. ACS numbers come from a surveyed population sample.

Check the most recent year of ACS data in the Census Bureau's API dataset listing at https://api.census.gov/data.html: search for "acs5" and note the most recent year available.

Of course, more variables from the ACS can be added. The easiest approach is to add variables that exist at the block group level; a sketch follows. If you are adding variables for which tract-level data is the most detailed geography reported, follow the steps in the code below that process the disability information. If more variables are added from the ACS, be sure to update the metadata.csv file (this is outlined later in the "combine all variables" step).
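As an illustration, adding a hypothetical block-group-level variable (here B25035_001, median year structure built) involves three steps:

# 1. Append the variable to the acs_variables vector defined below:
#    acs_variables <- c(acs_variables, "B25035_001")
# 2. Rename/derive it in the processing step, e.g.:
#    rename(median_yr_built = B25035_001)
# 3. Add a matching row (variable = "median_yr_built") to data-raw/metadata.csv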

if (process_demographics == TRUE) {
  census_api_key(Sys.getenv("CENSUS_KEY"))

  v20 <- load_variables(acs_year, "acs5", cache = TRUE)
  # View(v20 %>% filter(geography == "tract"))
  # View(v20 %>% filter(geography == "block group"))
  #  you may need to access the table shells: https://www.census.gov/programs-surveys/acs/technical-documentation/table-shells.html
  #  census reporter topics are also very useful! https://censusreporter.org/topics/

  acs_variables <- c(
    "B01001_001", # poptotal
    paste0("B01001_00", c(3:6)), # under18 m
    paste0("B01001_0", c(27:30)), # under18 f
    paste0("B01001_0", c(20:25, 44:49)), # over 65m, f
    "B19013_001", # median hh income
    "B25003_001", # tenure_total
    "B25003_002", # tenure owners
    "B23025_001", # employment status denominator
    "B23025_007", # unemployed
    paste0("C17002_00", c(1:6)) # poverty status
  )

  acs_fxn <- function(.geo, .process = NULL, .extra_vars = NULL) {
    step1 <- get_acs(
      geography = .geo,
      variables = if (is.null(.extra_vars)) {
        acs_variables
      } else {
        .extra_vars
      }, # c(acs_variables, .extra_vars),
      survey = "acs5",
      state = state,
      county = county,
      year = acs_year
    ) %>%
      select(-moe, -NAME) %>%
      pivot_wider(names_from = variable, values_from = estimate)

    # process to useable forms of the data
    if (is.null(.process)) {
      step2 <- step1 %>%
        rowwise() %>% 
        mutate(
          under18 = 
            sum(c(B01001_003, B01001_004, B01001_005,
                  B01001_006, B01001_027, 
                  B01001_028, B01001_029, B01001_030),
                na.rm = TRUE),
          over65 = sum(c(B01001_020, B01001_021, B01001_022, 
                         B01001_023, B01001_024, 
                         B01001_025, B01001_044, B01001_045,
                         B01001_046, B01001_047, 
                         B01001_048, B01001_049), na.rm = TRUE)) %>%
        mutate(across(c(under18, over65), ~ .x / B01001_001), # , .names = "{.col}_percent"),
               sens_age = under18 + over65,
               pownhome = B25003_002 / B25003_001,
               pwk_nowork = B23025_007 / B23025_001,
               ppov185 = (C17002_002 + C17002_003 + C17002_004 +
                            C17002_005 + C17002_006) / C17002_001
        ) %>%
        rename(hhincome = B19013_001) %>%
        select(c(GEOID, !starts_with(c("B", "C"))))
    }

    if (is.null(.process)) {
      return(step2)
    } else {
      return(step1)
    }
  }

  acs_blockgroups <- acs_fxn(.geo = "block group") %>%
    mutate(tract_id = substr(GEOID, start = 1, stop = 11))

  acs_disability <- acs_fxn(
    .geo = "tract",
    .process = "dont complete",
    .extra_vars = c("C18108_001", "C18108_005", "C18108_009", "C18108_013")
  ) %>%
    rowwise() %>% 
    mutate(nodisability = sum(c(C18108_005, C18108_009, C18108_013))) %>%
    transmute(
      tract_id = GEOID,
      pd_any = 1 - (nodisability / C18108_001)
    )

  acs_tracts <- acs_fxn(.geo = "tract") %>%
    # .extra_variables = (paste0("C18108_0", c("01", "05", "09", "13"))))
    # # %>% #disability status; only exists at tract level
    mutate(across(c(hhincome:ppov185), ~.x, .names = "tr_{.col}")) %>%
    rename(tract_id = GEOID) %>%
    select(starts_with("tr")) %>%
    full_join(acs_disability)

  ###
  # also want to get regional averages for metadata
  ###
  acs_county <- get_acs(
    geography = "county",
    variables = c(acs_variables,
                  paste0("C18108_0", c("01", "05", "09", "13"))),
    survey = "acs5",
    state = state,
    county = county,
    year = acs_year
  ) %>% 
    add_column(geo = "region averages")


  acs_metadata <- acs_county %>% 
    group_by(geo, variable) %>%
    summarise(estimate = sum(estimate)) %>%
    pivot_wider(names_from = variable, values_from = estimate) %>%
    ungroup() %>% 
    # process to useable forms of the data
    mutate(
      under18 = 
        sum(c(B01001_003, B01001_004, B01001_005,
              B01001_006, B01001_027, 
              B01001_028, B01001_029, B01001_030),
            na.rm = TRUE),
      over65 = sum(c(B01001_020, B01001_021, B01001_022, 
                     B01001_023, B01001_024, 
                     B01001_025, B01001_044, B01001_045,
                     B01001_046, B01001_047, 
                     B01001_048, B01001_049), na.rm = TRUE),
      nodisability = sum(c(C18108_005, C18108_009, C18108_013),
                         na.rm = TRUE)
    ) %>%
    mutate(across(c(under18, over65), ~ .x / B01001_001), 
           sens_age = under18 + over65,
           pownhome = B25003_002 / B25003_001,
           pwk_nowork = B23025_007 / B23025_001,
           ppov185 = (C17002_002 + C17002_003 + C17002_004 + C17002_005 + C17002_006) / C17002_001,
           pd_any = 1 - (nodisability / C18108_001)
    ) %>%
    select(c(geo, !starts_with(c("B", "C")), -nodisability)) %>% 
    left_join(
      # hhincome is not summed
      acs_county %>% 
        filter(variable %in% c("B19013_001",
                               "B01001_001")) %>% 
        select(-moe) %>% 
        pivot_wider(names_from = variable, values_from = estimate) %>%
        group_by(geo) %>% 
        # hhincome is the mean county median hh income, 
        # weighted by population
        summarise(hhincome = weighted.mean(B19013_001, B01001_001)),
      by = "geo")



  ###
  # save files
  ###

  save(acs_blockgroups, acs_tracts, acs_metadata, file = paste0(here::here(), "/data-raw/acs_data.rda"))
} else {
  load(paste0(here::here(), "/data-raw/acs_data.rda"))
}

Essential geographic data

A 2010-to-2020 crosswalk is built here because some data sources (notably the CDC PLACES data below) still publish on 2010 census geographies; the PLACES section explains how to download the crosswalk file from NHGIS.
fips <- tigris::lookup_code(state = state) %>%
  stringr::str_split("'", simplify = TRUE)

crosswalk <- read_csv(paste0(here::here(), "/data-raw/nhgis_bg2020_tr2010_", fips[, 2], ".csv"),
                      col_types = c(
                        "tr2010ge" = "c",
                        "bg2020ge" = "c"
                      )
) %>%
  select(
    tr2010ge, # census geoid for 2010
    bg2020ge, # census geoid for 2020
    wt_pop
  ) %>% # use weighted population crosswalk
  group_by(bg2020ge) %>%
  slice(which.max(wt_pop)) %>% # just get the crosswalk for where most people live. it's oversimplified, but pretty decent esp since block groups generally nest nicely into tracts across census geographies
  select(-wt_pop)

PLACES health data

PLACES data was last updated August 25, 2023

Health metrics come from PLACES: Local Data for Better Health, Census Tract Data. I am not aware of a way to use a "health_year" parameter to request a specific vintage of the data, so please double check that the most recent data is being used! CDC has indicated that it may soon switch to 2020-vintage census geographies rather than the old 2010 geographies, at which point this processing will be simplified.

Also, because this CDC data currently uses old census geographies, you will need to download a crosswalk:

- Download the state-specific crosswalk of "Block Groups --> Census Tracts" for years "2010 --> 2020" from NHGIS
- Place the resulting file in the data-raw folder
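
To confirm which PLACES release you fetched, you can peek at a few raw rows before the year field gets dropped in the processing below (a sketch):

# read.socrata("https://chronicdata.cdc.gov/resource/cwsq-ngmh.json?$limit=5",
#              app_token = Sys.getenv("CDC_KEY"))$year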

# CDC health data
# variable options are documented here: https://www.cdc.gov/places/measure-definitions/index.html

if (process_demographics == TRUE) {
  raw_health <- read.socrata(
    paste0("https://chronicdata.cdc.gov/resource/cwsq-ngmh.json?$where=stateabbr in('", state, "')"),
    app_token = Sys.getenv("CDC_KEY"),
    email = Sys.getenv("CDC_EMAIL"),
    password = Sys.getenv("CDC_PASSWORD")
  ) %>%
    # get the columns of interest. there are more variables as an fyi though!
    # names(bg_health)
    # levels(as.factor(bg_health$measure))
    filter(measure %in% c(
      "Current asthma among adults aged >=18 years",
      "Chronic obstructive pulmonary disease among adults aged >=18 years",
      "Mental health not good for >=14 days among adults aged >=18 years",
      "Physical health not good for >=14 days among adults aged >=18 years"
    )) %>%
    dplyr::select(
      # health_data_year = year,
      locationname, measureid, data_value) %>%
    mutate(data_value = as.numeric(data_value) / 100) %>% # change to fraction
    pivot_wider(names_from = measureid, values_from = data_value)


  ## translate health data into 2020 geographies
  health_data <- raw_health %>%
    full_join(crosswalk, by = c("locationname" = "tr2010ge")) %>%
    rename(GEOID = bg2020ge) %>%
    select(-locationname) %>% 
    unique()

  save(file = paste0(here::here(), "/data-raw/health_data.rda"), health_data)
} else {
  load(paste0(here::here(), "/data-raw/health_data.rda"))
}

Process geographies

Growing Shade is set up to have data at the block group, neighborhood, and city/township (ctu) levels.

Census block groups

Getting these geography files is easy: they can be downloaded directly with tigris.

bg_geo <- block_groups(
  state = state,
  county = county,
  year = census_year
)

Neighborhoods and city levels

Since we're going to be making a map which shows census tracts, cities, or neighborhoods depending on the user input, a crosswalk needs to be made which relates block groups to the city and neighborhood levels.

This section doesn't seem to be easily scalable to other regions unless zip codes are an appropriate stand-in for neighborhoods. As such, the code for this section will likely need to be customized, and processing will be done within the 02_geographies.Rmd script. Further, if this section doesn't apply for other regions, it should be easy enough to remove elements in the user-interface of the application.

In any case, the following files should exist in the data folder: ctu_crosswalk and nhood_crosswalk (again, the 02_geographies.Rmd script shows an example of creating these files for the Twin Cities region).

if (regional_specific_data == FALSE) {

  # cities
  county_outline <- if (is.null(county)) {
    tigris::counties(state = state)
  } else {
    tigris::counties(state = state) %>%
      filter(NAME %in% county)
  }

  cities <- tigris::county_subdivisions(
    state = state,
    county = county,
    class = "sf"
  ) %>%
    mutate(NAME = case_when(
      LSAD == 44 ~ paste(NAME, "Twp."),
      LSAD == 46 ~ paste(NAME, "(unorg.)"),
      TRUE ~ NAME
    )) %>%
    group_by(NAME) %>%
    mutate(n = n()) %>%
    left_join(st_drop_geometry(county_outline) %>%
                transmute(
                  COUNTYFP = COUNTYFP,
                  CONAME = NAME
                )) %>%
    mutate(NAME = case_when(
      n > 1 & LSAD != 25 ~ paste0(NAME, " - ", CONAME, " Co."), # cities dont get merged
      TRUE ~ NAME
    )) %>%
    group_by(NAME) %>%
    summarise() %>%
    # summarize(geometry = st_union(geom)) %>%
    arrange(NAME) %>%
    rename(GEO_NAME = NAME)

  ctu_crosswalk <- cities %>%
    select(GEO_NAME) %>%
    st_transform(26915) %>%
    st_buffer(-150) %>% # buffer the perimeter of the geography
    st_intersection(bg_geo %>%
                      dplyr::select(GEOID, ALAND, AWATER) %>%
                      st_transform(26915)) %>%
    # if less than 1% of block group falls within city boundary, don't assign it to the city
    mutate(
      approx_area = as.numeric(st_area(.)),
      percent = approx_area / (ALAND + AWATER)
    ) %>%
    filter(percent > .01) %>%
    st_drop_geometry() %>%
    rename(bg_id = GEOID)

  # collapse to one row per block group, listing all overlapping jurisdictions
  wide_ctu_crosswalk <- ctu_crosswalk %>%
    group_by(bg_id) %>%
    summarise(jurisdiction = paste(GEO_NAME, collapse = ", "))

  # neighorhoods / zip codes
  zips <- tigris::zctas(state = state, year = 2010)

  elligible_nhoods <- count(wide_ctu_crosswalk, jurisdiction) %>%
    filter(n > 100) %>% # cities must have >100 block groups in order to have "neighborhoods"
    left_join(cities, by = c("jurisdiction" = "GEO_NAME")) %>%
    st_as_sf()

  zip_list <- zips %>%
    st_transform(26915) %>%
    st_intersection(elligible_nhoods %>%
                      st_transform(26915) %>%
                      st_buffer(-200)) %>% # give some breathing room to reduce overlaps
    st_drop_geometry()

  nhoods <- zips %>%
    filter(ZCTA5CE10 %in% zip_list$ZCTA5CE10) %>%
    left_join(zip_list) %>%
    transmute(
      GEO_NAME = ZCTA5CE10,
      city = jurisdiction
    )

  nhood_crosswalk <- nhoods %>%
    st_transform(26915) %>%
    st_buffer(-150) %>% # buffer the perimeter of the geography
    st_intersection(bg_geo %>%
                      dplyr::select(GEOID, ALAND, AWATER) %>%
                      st_transform(26915)) %>%
    # if less than 1% of block group falls within city boundary, don't assign it to the city
    mutate(
      approx_area = as.numeric(st_area(.)),
      percent = approx_area / (ALAND + AWATER)
    ) %>%
    filter(percent > .01) %>%
    st_drop_geometry() %>%
    transmute(
      GEO_NAME = GEO_NAME,
      city = city,
      bg_id = GEOID
    )

  usethis::use_data(ctu_crosswalk, overwrite = TRUE)
  usethis::use_data(nhood_crosswalk, overwrite = TRUE)
} else {
  county_outline <- if (is.null(county)) {
    tigris::counties(state = state)
  } else {
    tigris::counties(state = state) %>%
      filter(NAME %in% county)
  }

  load(paste0(here::here(), "/data-raw/geography_data.rda"))
  usethis::use_data(ctu_crosswalk, overwrite = TRUE)
  usethis::use_data(nhood_crosswalk, overwrite = TRUE)
}
if (regional_specific_data == FALSE) {
  cities_raw <- tigris::county_subdivisions(
    state = state,
    county = county,
    class = "sf"
  ) 

  cities <- st_drop_geometry(cities_raw) %>% 
    group_by(NAME) %>% # pick the biggest geography if there are city/town/township issues with the same name. 
    mutate(n = n(),
           maxaland = max(ALAND)) %>%
    arrange(NAME) %>%
    mutate(drop = if_else(n > 1 & maxaland != ALAND, "drop", "keep")) %>%
    filter(drop == "keep") %>%
    left_join(cities_raw) %>%
    st_as_sf()

  ctu_crosswalk <- cities %>% select(NAME) %>%
    st_transform(26915) %>%
    st_buffer(-150) %>% #buffer the perimeter of the geography
    st_intersection(bg_geo %>% 
                      dplyr::select(GEOID) %>%
                      st_transform(26915)) %>%
    st_drop_geometry() %>%
    rename(GEO_NAME = NAME,
           bg_id = GEOID)

  # collapse to one row per block group, listing all overlapping jurisdictions
  wide_ctu_crosswalk <- ctu_crosswalk %>%
    group_by(bg_id) %>%
    summarise(jurisdiction = paste(GEO_NAME, collapse = ", "))

  nhood_crosswalk <- ctu_crosswalk %>%
    mutate(city = "placeholder")

  usethis::use_data(ctu_crosswalk, overwrite = TRUE)
  usethis::use_data(nhood_crosswalk, overwrite = TRUE)

} else {
  load(paste0(here::here(), "/data-raw/geography_data.rda"))
  usethis::use_data(ctu_crosswalk, overwrite = TRUE)
  usethis::use_data(nhood_crosswalk, overwrite = TRUE)
}

Remote sensing data

This section may scale to other regions, but currently requires some initial processing steps done outside of R, and hence the code has been moved into the 03_remotesensing.Rmd script.
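When region-specific data is available, the canopy_data.rda file loaded below is assumed (based on the random-data fallback in this chunk) to contain bg_canopy, bg_ndvi, ctu_list_raw, and nhood_list_raw. A quick structural check:

# load(paste0(here::here(), "/data-raw/canopy_data.rda"))
# str(bg_canopy) # expect: bg_id, canopy_percent, avgcanopy
# str(bg_ndvi)   # expect: bg_id, ndvi_uncultivated, ndvi_land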

if (regional_specific_data == FALSE) {
  cli::cli_alert_warning("Generating random data")
  bg_canopy <- bg_geo %>%
    st_drop_geometry() %>%
    transmute(
      bg_id = GEOID,
      canopy_percent = sample(10:60, n(), replace = TRUE) / 100,
      avgcanopy = .3
    )

  bg_ndvi <- bg_geo %>%
    st_drop_geometry() %>%
    transmute(
      bg_id = GEOID,
      ndvi_uncultivated = sample(10:80, n(), replace = TRUE) / 100,
      ndvi_land = sample(10:80, n(), replace = TRUE) / 100
    )

  ctu_list_raw <- cities %>%
    transmute(
      GEO_NAME = GEO_NAME,
      canopy_percent = sample(10:60, n(), replace = TRUE) / 100,
      avgcanopy = .3
    ) %>%
    full_join(left_join(ctu_crosswalk, bg_canopy) %>%
                group_by(GEO_NAME) %>%
                summarise(
                  min = round(min(canopy_percent) * 100, 1),
                  max = round(max(canopy_percent) * 100, 1),
                  n_blockgroups = n()
                )) %>%
    st_as_sf() %>%
    arrange(GEO_NAME)

  nhood_list_raw <- nhoods %>%
    transmute(
      GEO_NAME = GEO_NAME,
      city = city,
      canopy_percent = sample(10:60, n(), replace = TRUE) / 100,
      avgcanopy = .3
    ) %>%
    full_join(left_join(nhood_crosswalk, bg_canopy) %>%
                group_by(GEO_NAME, city) %>%
                summarise(
                  min = round(min(canopy_percent) * 100, 1),
                  max = round(max(canopy_percent) * 100, 1),
                  n_blockgroups = n()
                )) %>%
    st_as_sf() %>%
    arrange(GEO_NAME)
} else {
  load(paste0(here::here(), "/data-raw/canopy_data.rda"))
}

Environmental data

Lifetime cancer risk from air toxics

The EPA's EJSCREEN data release can be downloaded automatically and filtered to the region of interest. Interestingly, the 2021 EJSCREEN data uses 2017 AirToxScreen data, even though a more recent 2018 AirToxScreen file appears to exist as of October 3, 2022.

It is my recommendation to use the EJSCREEN data because the process can be automated, and then just check back to see when this data gets updated for future years. Hopefully the 2018 AirToxScreen data will be incorporated soon.

UPDATE: The 2023 EJSCREEN data was released in September 2023 and is used below; the commented-out lines preserve the locations of the older 2021 datasets.

if (process_demographics == TRUE) {
  temp <- tempfile()
  temp2 <- tempfile()
  download.file(
    "https://gaftp.epa.gov/EJScreen/2023/2.22_September_UseMe/EJSCREEN_2023_Tracts_with_AS_CNMI_GU_VI.csv.zip",
    # "https://gaftp.epa.gov/EJSCREEN/2021/EJSCREEN_2021_USPR_Tracts.csv.zip",
    destfile = temp
  )
  unzip(zipfile = temp, exdir = temp2)
  list.files(temp2)

  ejscreen <- read_csv(
    file.path(temp2, "EJSCREEN_2023_Tracts_with_AS_CNMI_GU_VI.csv"),
    # file.path(temp2, "EJSCREEN_2021_USPR_Tracts.csv"), # older 2021 release
    col_select = c(ST_ABBREV, CANCER, ID)
  ) %>%
    filter(ST_ABBREV %in% state)

  # #ejscreen is still using 2010 geographies
  # ejscreen %>% left_join(bg_geo %>%
  #                          mutate(ID = substr(GEOID, start = 1, stop = 11))) %>%
  #   st_as_sf() %>%
  #   ggplot() +
  #   geom_sf(aes(fill = CANCER))

  files <- list.files(temp2, full.names = TRUE)
  file.remove(files)

  ## join tract-level values onto 2020 block groups
  cancer_data <- ejscreen %>%
    mutate(tract_id = ID) %>% 
    left_join(bg_geo %>% 
                mutate(tract_id = stringr::str_sub(GEOID, end = 11))) %>% 
        select(GEOID, env_cancer = CANCER) %>% 
    filter(!is.na(env_cancer),
           !is.na(GEOID))

  save(file = paste0(here::here(), "/data-raw/cancer_data.rda"), cancer_data)
} else {
  load(paste0(here::here(), "/data-raw/cancer_data.rda"))
}

Climate vulnerability - temperature & flooding

This section relies on processing remote sensing data outside of R, and may require region-specific processing to work elsewhere. Here, the files used are CLIMATE_BG20.xlsx (average temperature and primary flood risk by 2020 block group) and heat.rda (updated 2022 extreme heat estimates):

if (regional_specific_data == FALSE) {
  cli::cli_alert_warning("Generating random sample data")
  env_data <- bg_geo %>%
    st_drop_geometry() %>%
    transmute(
      bg_id = GEOID,
      avg_temp = sample(85:110, n(), replace = TRUE),
      prim_flood = sample(0:100, n(), replace = TRUE) / 100
    )
} else {

  # raw_bg_heat20 <- read_csv("https://raw.githubusercontent.com/Metropolitan-Council/extreme.heat/main/r_postprocessing/data-raw/ExtremeHeat2022_blockgroups2020.csv") %>%
  #   select(GEOID,mean_LST) %>%
  #   mutate(mean_numeric = gsub("[^0-9.-]", "", mean_LST) %>% as.numeric())


  load("data-raw/heat.rda")
  # https://github.com/Metropolitan-Council/extreme.heat/blob/main/r_postprocessing/R/heat.R
  env_data <- readxl::read_xlsx(paste0(here::here(), "/data-raw/CLIMATE_BG20.xlsx")) %>%
    transmute(
      bg_id = BG20,
      avg_temp = AVG_TEMP,
      prim_flood = PRIM_FLOOD
    ) %>% 
    full_join(heat,
              by = c("bg_id" = "bg20")) %>% 
    mutate(avg_temp = heat2022) %>% 
    select(-heat2022)

  # flood data has not changed, so it is still taken from CLIMATE_BG20.xlsx
}

Other & geographic overlay files

This may not be relevant for other regions, but here is the code.

Redlining

This data describes the proportion of each block group that was redlined (i.e., graded "Hazardous" by the Home Owners' Loan Corporation).

if (regional_specific_data == FALSE) {
  holc_data <- bg_geo %>%
    st_drop_geometry() %>%
    transmute(
      bg_id = GEOID,
      holc_pred = 0
    )
} else {
  holc_data <- readxl::read_xlsx(paste0(here::here(), "/data-raw/HOLC_BG20.xlsx")) %>%
    transmute(
      bg_id = BG20,
      holc_pred = HOLC_4_RED
    ) %>%
    right_join(bg_geo %>% st_drop_geometry() %>% transmute(bg_id = GEOID)) %>%
    mutate(holc_pred = if_else(is.na(holc_pred), 0, holc_pred))
}
# Remove open water. "all non residential"

## holc  ---------------
# ftp://ftp.gisdata.mn.gov/pub/gdrs/data/pub/us_mn_state_metc/plan_historic_holc_appraisal/gpkg_plan_historic_holc_appraisal.zip
temp <- tempfile()
download.file("https://resources.gisdata.mn.gov/pub/gdrs/data/pub/us_mn_state_metc/plan_historic_holc_appraisal/gpkg_plan_historic_holc_appraisal.zip",
              destfile = temp
)

redline_raw <- sf::read_sf(unzip(temp, "plan_historic_holc_appraisal.gpkg")) %>%
  filter(HSG_SCALE == "Hazardous") %>%
  sf::st_union() %>%
  sf::st_transform(4326)

fs::file_delete("plan_historic_holc_appraisal.gpkg")

MPCA area of environmental justice concern

This data was used initially, but MPCA appears to have stopped updating it, so it no longer seems relevant.

# nothing here yet; this seems to be out-of-date...but it could come from equity considerations data

Create final data pieces

These steps outline how to create the final files needed to run the application. All of these files go into the data folder.

Merge data variables

Merge demographic and environmental data. Additionally, standardize and re-scale variables so that equally weighted priority scores can be created. Do not include spatial data at this point; it should be joined after summarizing to save computational time.

Additionally, please check (and double check if problems arise) the data-raw/metadata.csv file. To exclude variables that have been processed in the steps above, simply remove the variable's row from the metadata file. If you have added variables in the steps above, make sure those new variables appear in the metadata (otherwise they will not appear in the final data).
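For intuition, here is a minimal sketch of the standardize-and-rescale step applied below: each variable is z-scored across block groups and mapped onto a 0-10 priority scale with pnorm(), flipping direction for variables where low values indicate opportunity:

x <- c(0.10, 0.25, 0.40, 0.55) # hypothetical raw values for one variable
z <- (x - mean(x)) / sd(x) # z-scores across block groups
round(pnorm(z) * 10, 1) # "high_opportunity": high raw values -> high priority
round(10 - pnorm(z) * 10, 1) # "low_opportunity": low raw values -> high priority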

nrow(decennial_data)
nrow(bg_canopy)
nrow(health_data)
nrow(cancer_data)
nrow(acs_blockgroups)
nrow(acs_disability)
code_metadata <- read_csv(paste0(here::here(), "/data-raw/metadata.csv"))

bg_growingshade_main <-
  # direct-fetching data; exists for all regions/scale-able
  acs_blockgroups %>% 
  ungroup() %>% 
  full_join(decennial_data) %>%
  full_join(health_data) %>%
  full_join(cancer_data) %>%
  full_join(bg_geo %>% st_drop_geometry() %>% transmute(GEOID = GEOID, land = ALAND)) %>%
  full_join(acs_tracts) %>%
  ungroup() %>% 
  # deal with some non-residential blocks and insufficient data; demographics specific data
  mutate(
    across(-c(GEOID, fancyname, tract_id, land, starts_with("tr"), 
              population_count),
           ~ if_else(population_count == 0, NA_real_, .x)),
    under18 = if_else(is.nan(under18), tr_under18, under18),
    over65 = if_else(is.nan(over65), tr_over65, over65),
    sens_age = if_else(is.nan(sens_age), tr_sens_age, sens_age),
    pownhome = if_else(is.nan(pownhome), tr_pownhome, pownhome),
    ppov185 = if_else(is.nan(ppov185), tr_ppov185, ppov185),
    hhincome = if_else(is.na(hhincome) & population_count > 0, tr_hhincome, hhincome),
    pwk_nowork = if_else(is.na(pwk_nowork) & population_count > 0, tr_pwk_nowork, pwk_nowork)
  ) %>%
  select(!starts_with("tr")) %>%
  filter(!is.na(fancyname)) %>%
  # tree canopy data; needs to be prepared separately for different regions
  full_join(bg_canopy, by = c("GEOID" = "bg_id")) %>%
  full_join(bg_ndvi, by = c("GEOID" = "bg_id")) %>%
  # other data; may not apply for other regions
  full_join(env_data, by = c("GEOID" = "bg_id")) %>% # extreme heat, flood risk
  full_join(holc_data, by = c("GEOID" = "bg_id")) %>% # redlining
  # start to process the data
  ungroup() %>% 
  mutate(
    inverse_ndvi_uncultivated = ndvi_uncultivated,
    inverse_ndvi_land = ndvi_land,
    inverse_canopy = canopy_percent,
    pop_density = (population_count / land) * 4046.86, # persons per acre (1 acre = 4046.86 m^2)
    housing_density = (housing_units / land) * 4046.86 # housing units per acre
  ) %>%
  rename(bg_string = GEOID) %>%
  # standardize and rescale
  pivot_longer(names_to = "variable",
               values_to = "raw_value",
               -c(bg_string, fancyname)) %>% # end the code after this line if you just want the reshaped data
  # only group by VARIABLE, not bg_id
  group_by(variable) %>%
  mutate(
    MEAN = mean(raw_value, na.rm = TRUE),
    SD = sd(raw_value, na.rm = TRUE),
    MIN = min(raw_value, na.rm = TRUE),
    MAX = max(raw_value, na.rm = TRUE),
    COUNT = as.numeric(sum(!is.na(raw_value))),
    z_score = if_else(SD != 0, (raw_value - MEAN) / SD, 0)
  ) %>%
  full_join(code_metadata, by = "variable") %>%
  # create nominal weights
  mutate(
    weights_nominal = case_when(
      interpret_high_value == "high_opportunity" ~ (raw_value - MIN) / (MAX - MIN) * 10,
      interpret_high_value == "low_opportunity" ~ 10 - (raw_value - MIN) / (MAX - MIN) * 10,
      TRUE ~ NA_real_
    )) %>%
  # Weights Standard Score
  mutate(
    weights_scaled = case_when(
      interpret_high_value == "high_opportunity" ~ pnorm(z_score) * 10,
      interpret_high_value == "low_opportunity" ~ (10 - pnorm(z_score) * 10),
      TRUE ~ NA_real_
    )) %>%
  # weights rank
  mutate(
    weights_rank = case_when(
      interpret_high_value == "high_opportunity" ~ min_rank(desc(weights_nominal)) / COUNT * 10,
      interpret_high_value == "low_opportunity" ~ min_rank(desc(weights_nominal)) / COUNT * 10,
      TRUE ~ NA_real_
    )) %>%
  # rank
  mutate(
    overall_rank = case_when(
      interpret_high_value == "high_opportunity" ~ min_rank(desc(as.numeric(weights_nominal))),
      interpret_high_value == "low_opportunity" ~ min_rank(desc(as.numeric(weights_nominal)))
    )) %>%
  # clean
  dplyr::select(-MEAN, -SD, -MIN, -MAX) %>%
  full_join(wide_ctu_crosswalk, by = c("bg_string" = "bg_id")) %>%
  filter(!is.na(name)) %>%  # remove variables which are NOT in the metadata list
  unique()

Create regional averages

For comparative purposes, regional averages need to be computed. While the average values for the block groups are needed to create z-scores in the scaling and standardizing steps, those averages are not the regional averages. For instance, the average median income across block groups might be $60,000, while the regional average might be lower if more people live in areas with lower incomes.

The regional average data mostly comes from data which has already been fetched from the census (either decennial or ACS). In other cases, you may wish to supplement with regional averages for other data pieces.
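A small sketch of why these differ: with two block groups, the unweighted mean of the block-group medians sits above a population-weighted regional figure whenever more people live in the lower-income block group:

inc <- c(60000, 40000) # hypothetical block-group median incomes
pop <- c(1000, 4000) # block-group populations
mean(inc) # 50000: the average of block groups
weighted.mean(inc, pop) # 44000: population-weighted, as in acs_metadata above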

demo_metadata <- acs_metadata %>%
  full_join(decennial_metadata) %>%
  pivot_longer(names_to = "variable", 
               values_to = "MEANRAW2", -geo)

bg_averages <- bg_growingshade_main %>%
  group_by(variable) %>%
  summarise(
    MEANRAW = mean(raw_value, na.rm = TRUE),
    MEANSCALED = mean(weights_scaled, na.rm = TRUE)
  ) %>% 
  ungroup()

canopy_avg <-
  tribble(
    ~variable, ~MEANRAW2,
    "canopy_percent", bg_canopy$avgcanopy[1],
    "inverse_canopy", bg_canopy$avgcanopy[1]
  )

metadata <- bg_growingshade_main %>%
  dplyr::group_by(type, name, variable, interpret_high_value, climate_change, 
                  environmental_justice, public_health, conservation) %>%
  dplyr::count() %>%
  dplyr::ungroup() %>%
  full_join(bg_averages) %>%
  mutate(
    niceinterp =
      case_when(
        interpret_high_value == "high_opportunity" ~ "Higher",
        TRUE ~ "Lower"
      ),
    nicer_interp = case_when(
      niceinterp == "Lower" ~ "Lower values = higher priority",
      variable == "inverse_ndvi_uncultivated" ~ "Higher values = higher priority",
      variable == "inverse_ndvi_land" ~ "Higher values = higher priority",
      variable == "canopy_percent2" ~ "Higher values = higher priority",
      TRUE ~ ""
    )
  ) %>%
  full_join(demo_metadata %>%
              bind_rows(canopy_avg)) %>%
  mutate(MEANRAW = ifelse(!is.na(MEANRAW2), MEANRAW2, MEANRAW)) %>%
  dplyr::select(-MEANRAW2, -geo) %>%
  filter(!is.na(name))

usethis::use_data(metadata, overwrite = TRUE)

Highest priority

Inside the final dataset, the highest priority (among the public health, environmental justice, climate change, and conservation presets) for each block group is identified.

highest_p <- function(group_var) {
  selectedvars <- metadata %>%
    filter(!!enquo(group_var) == 1) %>%
    select(name)
  bg_growingshade_main %>%
    filter(name %in% selectedvars$name) %>%
    group_by(bg_string) %>%
    summarise(MEAN = mean(weights_scaled, na.rm = FALSE)) # na.rm = FALSE so public health or EJ scores aren't calculated for non-residential areas
}

priority_summary_1 <- highest_p(public_health) %>%
  rename(`Public health` = MEAN) %>%
  full_join(highest_p(conservation) %>% rename(Conservation = MEAN)) %>%
  full_join(highest_p(environmental_justice) %>% rename(`Environmental justice` = MEAN)) %>%
  full_join(highest_p(climate_change) %>% rename(`Climate change` = MEAN)) %>%
  pivot_longer(names_to = "preset", values_to = "score", -bg_string)

priority_summary <- priority_summary_1 %>%
  group_by(bg_string) %>%
  summarise(score = max(score, na.rm = TRUE)) %>%
  left_join(priority_summary_1) %>%
  rename(highest_priority = preset) %>%
  rename(GEOID = bg_string)

Core mapping data

The block group geographies need some final adjustments before they are ready to go into the mapping application.

mn_bgs_raw <- bg_geo %>%
  right_join(bg_growingshade_main %>% 
               ungroup() %>% 
               select(bg_string, fancyname) %>% 
               unique(),
             by = c("GEOID" = "bg_string")) %>%
  right_join(wide_ctu_crosswalk, by = c("GEOID" = "bg_id")) %>%
  full_join(bg_canopy, by = c("GEOID" = "bg_id")) %>%
  full_join(priority_summary) %>%
  full_join(priority_summary_1 %>%
              group_by(preset) %>%
              pivot_wider(names_from = preset, values_from = score),
            by = c("GEOID" = "bg_string")
  ) %>%
  ungroup() %>% 
  mutate(avgcanopy = mean(canopy_percent, na.rm = TRUE)) %>%
  dplyr::select(-c(STATEFP, COUNTYFP, TRACTCE, BLKGRPCE, NAMELSAD,
                   MTFCC, FUNCSTAT, INTPTLAT, INTPTLON)) %>%
  sf::st_as_sf() %>%
  sf::st_transform(4326) %>%
  filter(!is.na(highest_priority)) %>%
  rename(GEO_NAME = GEOID) %>% 
  unique()

region_outline <- if (is.null(county)) {
  county_outline %>%
    group_by(STATEFP) %>%
    summarise(geometry = sf::st_union(geometry)) %>%
    sf::st_simplify(dTolerance = 400) %>%
    sf::st_transform(4326)
} else {
  county_outline %>%
    group_by(COUNTYFP) %>%
    summarise(geometry = sf::st_union(geometry)) %>%
    sf::st_simplify(dTolerance = 400) %>%
    sf::st_transform(4326)
}

usethis::use_data(region_outline, overwrite = TRUE)

Simplify data for speed

Because all of the files inside the data folder need to be pushed up to the cloud in order to deploy the full version of Growing Shade, keeping the files as small as possible is important for the loading speed of the application. In essence, most geographic files (polygons) can be simplified to some degree, data files should contain only the columns necessary for running the application, and extraneous columns should be removed.

speed_up <- function(x, smooth) {
  x %>%
    sf::st_transform(26915) %>% # project to UTM 15N so dTolerance is in meters
    sf::st_simplify(dTolerance = smooth, preserveTopology = TRUE) %>%
    sf::st_transform(4326) # back to lat/long for leaflet
}

ctu_list <- ctu_list_raw %>%
  speed_up(50) # speed_up() handles the projection round-trip
usethis::use_data(ctu_list, overwrite = TRUE)

nhood_list <- nhood_list_raw %>%
  speed_up(50)
usethis::use_data(nhood_list, overwrite = TRUE)

redline <- redline_raw %>%
  speed_up(50)
usethis::use_data(redline, overwrite = TRUE)

mn_bgs <- mn_bgs_raw %>%
  speed_up(40) %>% # 25
  filter(!is.na(GEO_NAME)) %>%
  mutate(GEOID = GEO_NAME)

usethis::use_data(mn_bgs, overwrite = TRUE)

object.size(mn_bgs_raw) / 1e5 ; object.size(mn_bgs) / 1e5

bg_growingshade_main <- bg_growingshade_main %>%
  ungroup() %>%
  dplyr::select(variable, bg_string, name, weights_scaled, raw_value) %>%
  filter(!is.na(bg_string)) %>%
  arrange(bg_string)

usethis::use_data(bg_growingshade_main, overwrite = TRUE)

Running application

If the preceding steps have been followed, you should be able to navigate to the ./dev/run_dev.R file, select all the code, and run it! This should launch a local version of Growing Shade for the customized region!

Set user interface parameters

Creating a ui_params.rda data set is the easiest solution I can (currently!) think of that allows site-specific parameters to be set outside of the main application code. For instance, other regions are unlikely to have a city called "Oakdale," but a starting city must nevertheless be identified for the map to render correctly.

map_centroid <- region_outline %>%
  sf::st_union() %>%
  sf::st_centroid()

ui_params <- tribble(
  ~param, ~set, ~number,
  "cityselected", if (nrow(ctu_list) >= 129) {
    ctu_list[[129, 1]]
  } else {
    ctu_list %>%
      arrange(-n_blockgroups) %>%
      .[[1, 1]]
  }, NA_real_, # "Oakdale", NA_real_,
  "nhoodselected", "Frogtown", NA_real_,
  "center_latitude", NA_character_, st_coordinates(map_centroid)[2], # 44.963,
  "center_longitude", NA_character_, st_coordinates(map_centroid)[1], #-93.32,
  "center_zoom", NA_character_, 10,
  "tree_tile_location", 
  "https://metropolitan-council.github.io/treeraster-2021/GrowingShadeTealTrees_2021_toCloud/{z}/{x}/{y}", 
  NA_real_
)

usethis::use_data(ui_params, overwrite = TRUE)

# nhood_ui <- list(Minneapolis = nhood_list$GEO_NAME[nhood_list$city == "Minneapolis"],
#           `St. Paul` = nhood_list$GEO_NAME[nhood_list$city == "St. Paul"])
gs <- st_drop_geometry(nhood_list) %>%
  select(city, GEO_NAME) %>%
  group_by(city)
nhood_ui <- group_split(gs, .keep = FALSE) %>% purrr::flatten() # one vector of neighborhood names per city
names(nhood_ui) <- group_keys(gs)[[1]]

usethis::use_data(nhood_ui, overwrite = TRUE)

Run application!

Please note that this application should run even if no region-specific data is added; it will simply be filled in with random data. Please make sure that all random data is replaced BEFORE production.

# options(
#   shiny.launch.browser = TRUE,
#   scipen = 9999,
#   warn = -1,
#   verbose = FALSE,
#   golem.app.prod = FALSE
# ) # TRUE = production mode, FALSE = development mode
# 
# # Detach all loaded packages and clean your environment
# golem::detach_all_attached()
# 
# # Document and reload your package
# golem::document_and_reload()
# planting.shade::render_guides()
# 
# # Run the application
# run_app()

Troubleshooting application

Edit and add any region-specific language

There will likely be region-specific information that should be displayed alongside the data within the interactive application. Users should edit files within the "R" folder as needed.

Using ArcGIS for processing

If platforms other than R are used for processing data, there shouldn't be any problem! However, you may need to replace sections of the R processing code with code that reads in .shp files and saves them as .rda files. You'll also need to ensure that the spatial projection is correct for the leaflet package, which is used to map everything. I recommend placing the raw shapefiles inside the data-raw folder. Here is an example:

# test <- sf::read_sf(paste0(here::here(), "/data-raw/your_shapefile_name_here.shp")) %>%
#   sf::st_transform(4326)
# usethis::use_data(test, overwrite = TRUE)
write.csv(bg_growingshade_main %>% 
            sf::st_drop_geometry(), "data-raw/csv_copies/bg_growing_shade_main.csv",
          row.names = FALSE)

write.csv(ctu_crosswalk, "data-raw/csv_copies/ctu_crosswalk.csv",
          row.names = FALSE)
write.csv(ctu_list %>% 
            sf::st_drop_geometry(), "data-raw/csv_copies/ctu_list.csv",
          row.names = FALSE)
write.csv(metadata, "data-raw/csv_copies/metadata.csv",
          row.names = FALSE)

write.csv(mn_bgs %>% 
            sf::st_drop_geometry(), "data-raw/csv_copies/mn_bgs.csv",
          row.names = FALSE)

write.csv(nhood_crosswalk %>% 
            sf::st_drop_geometry(), "data-raw/csv_copies/nhood_crosswalk.csv",
          row.names = FALSE)

write.csv(nhood_list %>% 
            sf::st_drop_geometry(), "data-raw/csv_copies/nhood_list.csv",
          row.names = FALSE)
# write.csv(nhood_ui, "data-raw/csv_copies/nhood_ui.csv",
#           row.names = FALSE)
write.csv(redline %>% 
            sf::st_drop_geometry(), "data-raw/csv_copies/redline.csv",
          row.names = FALSE)
write.csv(region_outline %>% 
            sf::st_drop_geometry(), "data-raw/csv_copies/region_outline.csv",
          row.names = FALSE)

