Blodgett, D., Johnson, J.M., 2022, nhdplusTools: Tools for Accessing and Working with the NHDPlus, https://doi.org/10.5066/P97AS8JD
```r
install.packages("nhdplusTools")
```
For the latest development version:

```r
install.packages("remotes")
remotes::install_github("DOI-USGS/nhdplusTools")
```
For data discovery and access in a U.S. context, start with the Getting Started page.
Detailed documentation of all the package functions can be found at the Reference page.
nhdplusTools

nhdplusTools is designed to provide easy access to data associated with the U.S. National Hydrography Dataset. Many functions provided in nhdplusTools are thin wrappers around functions that have been migrated to hydroloom.
The nhdplusTools package is intended to provide a reusable set of tools to subset, relate data to, and generate network attributes for U.S. NHDPlus data. General, globally applicable functionality has been moved to hydroloom.

nhdplusTools implements a data model consistent with both the NHDPlus dataset and the HY_Features data model. The package aims to provide a set of tools that can be used to build workflows using NHDPlus data.
This vision is intended as a guide to contributors, conveying what kinds of contributions are of interest to the package's long-term direction. It reflects current thinking and is open to discussion and modification.

The following describes a vision for the functionality that should be included in the package in the long run.
The NHDPlus is a very large dataset both spatially and in terms of the number of attributes it contains. Subsetting utilities will provide network location discovery, network navigation, and data export utilities to generate spatial and attribute subsets of the NHDPlus dataset.
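As a rough sketch of such a workflow (requires internet access; the NWIS gage identifier, the 50 km navigation distance, and the `UT_flowlines` element name are illustrative assumptions and may vary by package version):

```r
# Sketch of a subsetting workflow using the Network Linked Data Index.
# The gage "USGS-05428500" is purely illustrative.
library(nhdplusTools)

site <- list(featureSource = "nwissite", featureID = "USGS-05428500")

# Network location discovery: find the feature on the network.
start <- get_nldi_feature(site)

# Network navigation: collect flowlines upstream with tributaries.
upstream <- navigate_nldi(site, mode = "upstreamTributaries",
                          distance_km = 50)

# Data export: write a spatial and attribute subset to a GeoPackage.
subset_nhdplus(comids = as.integer(upstream$UT_flowlines$nhdplus_comid),
               output_file = tempfile(fileext = ".gpkg"),
               nhdplus_data = "download",
               return_data = FALSE)
```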
One of the most important roles of the NHDPlus is as a connecting network for
ancillary data and models. The first step in any workflow that uses the
network like this is indexing relevant data to the network. A number of methods for indexing exist; they can be broken into two main categories: linear referencing and catchment indexing. Both operate on features represented by points, lines, and polygons. nhdplusTools should eventually support both linear and catchment indexing.
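As an illustrative sketch using the sample data shipped with the package (the GeoPackage path, layer names, and point coordinates are assumptions; adjust them for your own data):

```r
library(nhdplusTools)
library(sf)

# Sample NHDPlus subset shipped with the package (assumed path).
sample_gpkg <- system.file("extdata/sample_natseamless.gpkg",
                           package = "nhdplusTools")
flowlines <- read_sf(sample_gpkg, "NHDFlowline_Network")

# An illustrative point to index to the network.
point <- st_sfc(st_point(c(-89.36, 43.09)), crs = 4326)

# Linear referencing: nearest flowline COMID and a measure along it.
get_flowline_index(flowlines, point)

# Catchment indexing: a plain spatial join against catchment polygons.
catchments <- read_sf(sample_gpkg, "CatchmentSP")
st_join(st_transform(st_sf(geometry = point), st_crs(catchments)),
        catchments)
```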
Given that nhdplusTools is focused on working with NHDPlus data, the NHDPlus data model will largely govern the data model the package is designed to work with. That said, much of the package functionality also uses concepts from the HY_Features standard.
Note: The HY_Features standard is based on the notion that a "catchment" is a holistic feature that can be "realized" (some might say modeled) in a number of ways. In other words, a catchment can only be characterized fully through a collection of different conceptual representations. In NHDPlus, the "catchment" feature is the polygon feature that describes the drainage divide around the hydrologic unit that contributes surface flow to a given NHD flowline. While this may seem like a significant difference, in reality, the NHDPlus COMID identifier lends itself very well to the HY_Features catchment concept. The COMID is used as an identifier for the catchment polygon, the flowline that connects the catchment inlet and outlet, and value added attributes that describe characteristics of the catchment's interior. In this way, the COMID identifier is actually an identifier for a collection of data that together fully describe an NHDPlus catchment. See the NHDPlus mapping to HY_Features in the HY_Features specification.
Below is a description of the scope of data used by the nhdplusTools package. While other data and attributes may come into scope, they should only be added as a naive pass-through (as in data subsetting) or with considerable deliberation.
Flowline geometry is a mix of 1-d streams and 1-d "artificial paths". In order to complete the set of features meant to represent water, we need to include waterbody polygons.
Catchment polygons are the result of a complete elevation derived hydrography process with hydro-enforcement applied with both Watershed Boundary Dataset Hydrologic Units and NHD reaches.
The NHDPlus includes numerous attributes that are built using the network and allow a wide array of capabilities that would require excessive iteration or sophisticated and complex graph-oriented data structures and algorithms.
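As a hedged sketch of deriving such network attributes locally (the sample GeoPackage path and layer name are assumptions, and function interfaces may differ across package versions):

```r
library(nhdplusTools)
library(sf)

flowlines <- read_sf(system.file("extdata/sample_natseamless.gpkg",
                                 package = "nhdplusTools"),
                     "NHDFlowline_Network")

# Build a simple comid -> tocomid edge list from the network attributes.
edges <- get_tocomid(flowlines, return_dendritic = TRUE)

# Topologically sort so upstream features precede downstream ones, a
# prerequisite for accumulation-style calculations.
sorted <- get_sorted(st_drop_geometry(edges), split = FALSE)

# Accumulate incremental catchment area down the network.
sorted$totda <- calculate_total_drainage_area(
  data.frame(ID = sorted$comid, toID = sorted$tocomid,
             area = sorted$areasqkm))
```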
The NHDPlus is a very large dataset. The architecture of this package as it relates to handling data and what dependencies are used will be very important.
nhdplusTools offers a mix of web service and local data functionality.
Web services have generally been avoided for large processes. However, for tasks that a web service can accomplish very quickly but that would otherwise require loading significant amounts of data, web services are used.
Systems like the Network Linked Data Index are
used for data discovery.
Initial package development focused on the National Seamless NHDPlus database. NHDPlus High Resolution is also supported.
- https://github.com/mbtyers/riverdist
- https://github.com/jsta/nhdR
- https://github.com/lawinslow/hydrolinks
- https://github.com/mikejohnson51/HydroData
- https://github.com/ropensci/FedData
- https://github.com/hyriver/pygeohydro
- ... others -- please suggest additions?
This package uses a convention to avoid building vignettes on CRAN: the BUILD_VIGNETTES environment variable must be set to TRUE. This is done with a .Renviron file in the package directory containing the line BUILD_VIGNETTES=TRUE.

Given this, the package should be built locally to include vignettes using:

```r
devtools::build()
```
In addition to typical R package checking, a Dockerfile is included in this repository. The image can be built and run with the following commands.

```shell
docker build -t nhdplustools_test .
docker run --rm -it -v $PWD:/src nhdplustools_test /bin/bash -c "cp -r /src/* /check/ && cp /src/.Rbuildignore /check/ && cd /check && Rscript -e 'devtools::build()' && R CMD check --as-cran ../nhdplusTools_*"
```
First, thanks for considering a contribution! I hope to make this package a community-created resource for us all to gain from and won't be able to do that without your help!
1) Contributions should be thoroughly tested with testthat.
2) Code style should attempt to follow the tidyverse style guide.
3) Please attempt to describe what you want to do prior to contributing by submitting an issue.
4) Please follow the typical github fork - pull-request workflow.
5) Make sure you use roxygen and run R CMD check before contributing. More on this front as the package matures.
Other notes:
- consider running lintr prior to contributing.
- consider running goodpractice::gp() on the package before contributing.
- consider running devtools::spell_check() if you wrote documentation.
- this package uses pkgdown. Running pkgdown::build_site() will refresh it.