The goal of {hakedataUSA} is to provide code to extract and work up the U.S. data for the assessment of Pacific Hake.
First, you must update `data-raw/quotas.csv` to include the sector-specific quotas. These values are used when processing the data, mainly for the creation of figures. Then, from within R, source `data-raw/quotas.R` and the internal data object will be updated and ready for use. Commit both `data-raw/quotas.csv` and `data/quotas.rda` to the repository and push.
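As a minimal sketch of that refresh step, run from the root of your hakedataUSA clone (the file paths are the ones named above):

``` r
# Regenerate the internal quota data object after editing the CSV;
# sourcing data-raw/quotas.R updates the stored .rda file (see above).
source(file.path("data-raw", "quotas.R"))
```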
Next, load the package, either by installing from GitHub (first chunk) or by using a local clone (second chunk).
``` r
chooseCRANmirror(ind = 1)
pak::pak("pacific-hake/hakedataUSA")
library(hakedataUSA)
```
``` r
chooseCRANmirror(ind = 1)
stopifnot(basename(getwd()) == "hakedataUSA")
devtools::load_all()
```
The path where all of the raw output will be saved is returned by an internal function, `hakedata_wd()`. Try it out and see if it works for you. If it does not, you will need to alter the function, which is stored in `R/hakedata-R`. The function should return a path ending with `data-tables` inside your cloned version of pacific-hake/hake-assessment.
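A quick way to check the path from the R console (the expected `data-tables` suffix comes from the paragraph above):

``` r
# Inspect the directory used for raw output; it should end with
# "data-tables" inside your clone of pacific-hake/hake-assessment.
path <- hakedata_wd()
print(path)
stopifnot(basename(path) == "data-tables")
```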
The remainder of the code will pull from the databases and set up the input files.
``` r
pull_database()
process_database()
write_bridging(
  dir_input = fs::path(dirname(hakedata_wd()), "models", "2022.01.10_base"),
  dir_output = fs::path(
    dirname(hakedata_wd()), "models", "2023", "01-version", "02-bridging-models"
  )
)
```
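Because `hakedata_wd()` returns a path ending in `data-tables` inside the hake-assessment clone, `dirname(hakedata_wd())` resolves to the root of that clone, so the `models` directories above sit alongside `data-tables`.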
Please contact kelli.johnson@noaa.gov if there are issues with the code. Note that the databases will only be accessible to U.S. members of the JTC.