# DATA SOURCES {#app:data}

Commercial and research catch, effort, and biological data for groundfish are archived by the Groundfish Data Unit (Fisheries and Oceans Canada, Science Branch, Pacific Region) and housed in a number of relational databases [@databases2019]. Historical commercial catch and effort data from 1954--2006/2007 are housed in GFCatch, PacHarvest, PacHarvHL, and PacHarvSable, depending on the fishery and time period. Modern (2006/2007 to present) commercial catch data are housed in GFFOS, a groundfish-specific view of the Fishery Operations System (FOS) database (Fisheries and Oceans Canada, Fisheries and Aquaculture Management, Pacific Region). Research survey data and commercial biological data from the 1940s to present are housed in GFBio, the Groundfish Biological Samples database. Additional historical commercial sales slip records from the Halibut, Sablefish and Dogfish-Lingcod fisheries may exist in the PacHarv3 database. These additional data require more detailed analysis before inclusion in catch reconstructions and are not included in this report.

## GF_MERGED_CATCH FOR COMMERCIAL CATCH AND EFFORT DATA

Commercial catch and effort data for the synopsis report and gfplot functions are sourced from the table GF_MERGED_CATCH in the GFFOS database. In each commercial database there is an official catch table that provides the best available estimate of landed catch per location by applying the proportion of catch per set or area to trip-level landing data. Since 2015, the official catch tables from the various databases have been merged together into GF_MERGED_CATCH to facilitate and standardize commercial data extraction.

Catch proportions are calculated from the most spatially detailed information available on how much of each species was harvested per set or per area. In most cases this will be catch reported in observer logs or fisher logs. Older data contain records where set-level information was rolled up, for example, by area (see @rutherford1999 for details on how catch was recorded in databases). The proportions are applied to the best available information on how much of each species was harvested on a trip. In most cases this will be the landed weight as recorded by the Dockside Monitoring Program (DMP). Earlier harvest data are recorded from sales slips or observer or fisher logs (see @rutherford1999 for details on data sources).
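
As a toy illustration of this allocation, the following R sketch (all values and column names invented; not the database schema) spreads a trip-level DMP landed weight across sets in proportion to the set catches reported in logs:

```r
library(dplyr)

# Invented set-level log catches (kg) for one trip and one species:
set_log <- tibble::tibble(
  fishing_event_id = 1:3,
  log_catch_kg = c(500, 1500, 1000)
)
dmp_landed_kg <- 2700 # trip-level landed weight from dockside monitoring

# Allocate the trip landing to sets in proportion to the logged catch:
set_log <- mutate(set_log,
  prop_of_trip = log_catch_kg / sum(log_catch_kg),
  landed_kg_per_set = prop_of_trip * dmp_landed_kg # 450, 1350, 900
)
```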

Below are details of how the official catch tables are created in each of the databases populating GF_MERGED_CATCH.

### GFCatch 1954--1995 (TRAWL AND TRAP)

Catch data are extracted by trip and separated into retained weights (recorded on sales slips/landing records) and discarded weights (no counts) using utilization codes in the view vw_Total_Catch. The landings and discards are combined with trip, event, area and vessel tables to present the catches with associated details: trip ID, fishing event ID, sector, gear, vessel, best date (trip end date), best depth (in preferential order: average depth, minimum depth, maximum depth), species, area, and latitude and longitude (from the start of the set if available, otherwise from the end). Set (trawl tow or trap line) proportions of total landings are not calculated because most older data do not include set-level information from observer or fisher logs (data are rolled up by area). When source = 1 (trawl trip report) or 2 (trawl sales slip or landing record only), the gear type is set to trawl; when source = 5 (trap trip report) or 6 (trap sales slip or landing record only), the gear type is set to trap.

### PacHarvest 1996--2007 (TRAWL ONLY)

In the PacHarvest database, retained catch weights are extracted from on-board observer logs by hail-in number (usually representing a trip) and by set (trawl tow) to calculate each set's proportion of the total trip/hail-in catch. This proportion, derived from the observer log data, is then applied to the hail-in catch weights in dockside records to obtain a more accurate landed weight per set. These landings, as well as retained weights from fisher logs, are combined with trip-level (as for GFCatch above) and set-level details to create the D_Official_Catch table for PacHarvest.

### PacHarvSable 1996--2006 (TRAP AND HOOK AND LINE)

In PacHarvSable, the table D_Merged_Catches combines unique set numbers for each hail-in number with retained and discarded weights from fisher logs and landed weights from sales slips or dockside validation records. These catch data are combined with trip- and set-level data in the D_Official_Catch table, with the landed weight field reporting landings or, when landings are not available, retained weights.

### PacHarvHL 1985--2006 (HOOK AND LINE ONLY)

Catch data in PacHarvHL are combined with trip- and set-level data. Catch is recorded as the best of landed weight (sales slips or DMP records) or retained weight (fisher logs), with the source (either landed or retained) indicated in the source column. Latitude and longitude records correspond to the beginning of a fishing event. The best available depth is given for each fishing event as, in preferential order, the average of the start and end depths, the average of the minimum and maximum depths, the start depth, end depth, minimum depth, or maximum depth. The best available date is given as, in preferential order, the fishing event end date, the fishing event start date, or the trip end date.
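
This "best available" logic amounts to a preferential-order coalesce over the candidate fields. A minimal R sketch of the depth rule, with invented column names and values (not the PacHarvHL schema):

```r
library(dplyr)

# Invented fishing events with partially missing depth fields (m):
fe <- tibble::tibble(
  start_depth = c(100, NA, NA),
  end_depth   = c(120, NA, NA),
  min_depth   = c(NA,  90, NA),
  max_depth   = c(NA, 110, 75)
)

fe <- mutate(fe,
  best_depth = coalesce(
    (start_depth + end_depth) / 2, # average of start and end depths
    (min_depth + max_depth) / 2,   # average of min and max depths
    start_depth, end_depth,        # single-ended fallbacks
    min_depth, max_depth
  )
) # best_depth: 110, 100, 75
```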

### GFFOS 2007--PRESENT (TRAWL, TRAP AND HOOK AND LINE)

To create the official catch table in GFFOS, the average weight per piece (individual fish) is first calculated for each species by trip, for later use in populating catch weight where only a catch count is available. DMP landings are extracted by trip. Catch is extracted from observer and fisher log data by trip and separated into released/retained, legal/sublegal, liced, and bait categories using utilization codes. Average kg per piece is calculated by species from DMP data as ROUND_KG_PER_OFFLOAD_PIECE = OFFLOAD_WT/OFFLOAD_CT. When this is not available for a trip, ROUND_KG_PER_RETAINED_PIECE is calculated by species from log data. If this too is not available, kg per piece by species is calculated across all trips as EST_ROUND_KG_PER_PIECE. Observer and fisher log 'retained catch' data are extracted by fishing event. If there is no retained weight recorded but there is a retained count, then BEST_RETAINED_WT is calculated as the retained count multiplied by the best available average kg per piece from, in preferential order, ROUND_KG_PER_OFFLOAD_PIECE, ROUND_KG_PER_RETAINED_PIECE, or EST_ROUND_KG_PER_PIECE. Similarly, if there is no retained count recorded but there is a retained weight, then the retained weight is divided by the best available average weight per piece to give BEST_RETAINED_COUNT. Trip totals are then calculated for landed weight, retained catch weight, landed count, and retained catch count, and ratios are calculated for trip landed weight:trip retained catch weight and trip landed count:trip retained catch count.
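
As a rough sketch of this fallback logic (field values invented; lower-case stand-ins for the field names above), the three average-weight estimates can be coalesced in preferential order and used to fill a missing weight or count:

```r
library(dplyr)

kg_per_piece <- tibble::tibble(
  species = c("A", "B", "C"),
  round_kg_per_offload_piece  = c(2.1, NA, NA),  # from DMP offloads
  round_kg_per_retained_piece = c(NA, 1.4, NA),  # from logs, this trip
  est_round_kg_per_piece      = c(2.2, 1.5, 0.8) # across all trips
)
logs <- tibble::tibble(
  species = c("A", "B", "C"),
  retained_kg = c(NA, 700, NA),
  retained_count = c(300, NA, 50)
)

left_join(logs, kg_per_piece, by = "species") %>%
  mutate(
    best_kg_per_piece = coalesce(round_kg_per_offload_piece,
      round_kg_per_retained_piece, est_round_kg_per_piece),
    best_retained_kg = coalesce(retained_kg,
      retained_count * best_kg_per_piece),    # fill weight from count
    best_retained_count = coalesce(retained_count,
      round(retained_kg / best_kg_per_piece)) # fill count from weight
  )
```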

All best retained, landed and discarded catch weights and counts are combined in one view, GF_D_MERGED_FE_CATCH2_SUMRY_VW. Where LANDED_ROUND_KG is NULL but a retained weight is reported and a landed weight:retained catch weight ratio exists, landed weight is given as BEST_RETAINED_ROUND_KG $\times$ MTFEC.KG_RATIO. Similarly, if no landed count is reported, then LANDED_COUNT is given as BEST_RETAINED_COUNT $\times$ MTFEC.COUNT_RATIO.
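
The ratio step can be sketched the same way, with invented values:

```r
library(dplyr)

fe_catch <- tibble::tibble(
  landed_round_kg        = c(1000,  NA),
  best_retained_round_kg = c( 950, 800),
  kg_ratio               = c(1.05, 1.05) # trip landed:retained weight ratio
)

mutate(fe_catch,
  landed_round_kg = coalesce(landed_round_kg,
    best_retained_round_kg * kg_ratio) # second row filled as 800 * 1.05 = 840
)
```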

All catch and landings weight and count data by fishing event are then joined with several other pieces of data by fishing event, including vessel ID and name, data source, fishery sector, and area. Several fields present "best" data when there are multiple options. BEST_DATE is the offload date when there is less than 3 months' difference between the offload date and the best available logbook date (in preferential order: fishing event best, end or start date, or trip best, end or start date); otherwise, it is the best available logbook date. LATITUDE and LONGITUDE are, in preferential order, the reported start latitude/longitude, mid-latitude/longitude or end latitude/longitude. BEST_DEPTH is calculated as the average of the start and end depths and converted from fathoms to meters. These data are combined in the view GF_D_OFFICIAL_FE_CATCH_VW2 with the additional fishing event or trip data, generally obtained from observer logs, or from fisher logs when observer or validation logs are not available.
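
For illustration, the date rule and depth conversion amount to something like the following sketch, where the 3-month window is approximated as 90 days (an assumption; the query may count calendar months):

```r
# Prefer the offload date unless it is far from the best logbook date:
offload_date <- as.Date("2017-02-10")
logbook_date <- as.Date("2016-08-01") # e.g., fishing event end date
best_date <- if (abs(as.numeric(offload_date - logbook_date)) < 90) {
  offload_date
} else {
  logbook_date # dates differ by ~6 months, so the logbook date wins here
}

# Depth is recorded in fathoms; average of start and end, then convert:
best_depth_m <- mean(c(60, 80)) * 1.8288 # 1 fathom = 1.8288 m
```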

The official catch tables populate the GF_MERGED_CATCH table directly. Where there are duplicate records for a fishing event in GFFOS and either PacHarvHL or PacHarvSable, records from GFFOS are not incorporated into GF_MERGED_CATCH.

## DATA EXTRACTION DETAILS

We developed the package gfplot for the statistical software R [@r2018] to automate data extraction from these databases in a consistent, reproducible manner. The functions extract data using SQL queries, developed with support from the Groundfish Data Unit, which select and filter for specific data depending on the purpose of the analysis. The SQL file names mentioned in this section can be viewed on GitHub and will be archived on a local server with the final version of this document.
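
For example, assuming access to the DFO network and database credentials, typical extraction calls look like the following (each gfplot function wraps the SQL file noted in the comment):

```r
library(gfplot)

catch <- get_catch("pacific cod")                  # get-catch.sql
index <- get_survey_index("pacific cod")           # get-survey-index.sql
survey_samples <- get_survey_samples("pacific cod") # get-survey-samples.sql
comm_samples <- get_commercial_samples("pacific cod") # get-comm-samples.sql
```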

### COMMERCIAL CATCH DATA EXTRACTION

We extracted commercial catch data with get-catch.sql. All landings and discards are extracted by species, fishery sector, gear type and year, and are not filtered further.

We extracted commercial trawl catch and effort data (for later standardization) using get-cpue-index.sql, filtering the data to include only records with valid start and end dates (Table \@ref(tab:sql-cpue-index)); these records include set start and end times, which are later used to calculate effort (expressed in hours). Catch (kg), year, gear type and Pacific Fishery Management Area (PFMA) are extracted for each tow. Gear type, PFMA and minimum year are given as arguments, with defaults of bottom trawl, all areas, and 1996, respectively.
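
Effort for a tow is simply the elapsed time between the set start and end; for example, with illustrative times:

```r
start_time <- as.POSIXct("2017-05-01 06:30:00", tz = "UTC")
end_time <- as.POSIXct("2017-05-01 09:45:00", tz = "UTC")
effort_hours <- as.numeric(difftime(end_time, start_time, units = "hours"))
effort_hours # 3.25
```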

Data were not filtered by success of tows, which is recorded in the database as undefined success, checked but unknown success, fully useable, malfunction/damage, lost gear, or water haul. This could be incorporated in future versions of the report.

\begin{table}[htpb]
\centering
\caption{Description of filters in SQL queries extracting commercial trawl catch and effort data from \texttt{GFFOS.GF\_MERGED\_CATCH} with \texttt{get-cpue-index.sql}}
\label{tab:sql-cpue-index}
{\tabulinesep=1.6mm
\begin{tabu}{>{\raggedright\arraybackslash}m{2.8in}>{\raggedright\arraybackslash}m{3.2in}}
\toprule
Filters & Rationale \\
\midrule
Filtered for \texttt{END\_DATE} \texttt{IS NOT NULL} AND \texttt{START\_DATE} \texttt{IS NOT NULL} & To remove records with missing dates \\
Filtered for \texttt{YEAR(FE\_START\_DATE)} = \texttt{YEAR(FE\_END\_DATE)} and \texttt{FE\_END\_DATE} > \texttt{FE\_START\_DATE} & To remove records with erroneous dates \\
\bottomrule
\end{tabu}}
\end{table}

\vspace{0mm}

We extracted commercial trawl spatial CPUE data using get-cpue-spatial.sql, pulling out latitude, longitude, gear type, catch (kg) and CPUE (total catch/effort, in kg/hour) for every tow by species. The data are filtered to extract only records with valid start and end dates, to remove records with erroneous latitude and longitude values, and to include only positive tows from the groundfish trawl sector since 2013, following implementation of the trawl footprint in 2012 (Table \@ref(tab:sql-cpue-spatial)).
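
For a single tow, the CPUE value is total catch (landed plus released) divided by effort; with illustrative numbers:

```r
landed_kg <- 1200
released_kg <- 150
effort_hours <- 3.25
cpue_kg_per_hr <- (landed_kg + released_kg) / effort_hours # about 415 kg/hr
```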

\begin{table}[htpb]
\centering
\caption{Description of filters in SQL queries extracting commercial trawl spatial catch per unit effort (kg/hr) from \texttt{GFFOS.GF\_D\_OFFICIAL\_CATCH} with \texttt{get-cpue-spatial.sql}}
\label{tab:sql-cpue-spatial}
{\tabulinesep=1.6mm
\begin{tabu}{>{\raggedright\arraybackslash}m{2.8in}>{\raggedright\arraybackslash}m{3.2in}}
\toprule
Filters & Rationale \\
\midrule
Filtered for \texttt{LAT} between 47.8 and 55 and \texttt{LON} between -135 and -122 & To remove erroneous location records \\
Filtered for \texttt{YEAR(BEST\_DATE)} greater than 2012 & To extract only records since the trawl fishery footprint was established \\
Filtered for \texttt{YEAR(START\_DATE)} = \texttt{YEAR(END\_DATE)} and \texttt{END\_DATE} > \texttt{START\_DATE} & To remove records with erroneous dates \\
Filtered for \texttt{FISHERY\_SECTOR} = \texttt{GROUNDFISH TRAWL} & To extract only records in the groundfish trawl fishery \\
Filtered for \texttt{ISNULL(LANDED\_ROUND\_KG,0) + ISNULL(TOTAL\_RELEASED\_ROUND\_KG,0)} > 0 & To extract only records with positive catch \\
\bottomrule
\end{tabu}}
\end{table}

\vspace{0mm}

We extracted commercial hook and line spatial CPUE data using get-cpue-spatial-ll.sql, which pulls out latitude, longitude, gear type, catch (pieces) and year for all fishing events (sets, as the unit of effort) by species. The data are filtered to extract only records with valid start and end dates, to remove records with erroneous latitude and longitude values, and to include only records with hook and line gear and non-zero catch. Data include all records since 2008, after implementation of the Integrated Groundfish Management Plan (Table \@ref(tab:sql-cpue-spatial-ll)). CPUE is represented by landed catch in pieces per fishing event (set). Discards are not included in hook and line spatial CPUE because discarded pieces are not reliably recorded in all years. Species names are given as an argument to the gfplot function.

\begin{table}[htpb]
\centering
\caption{Description of filters in SQL queries extracting commercial hook and line spatial catch per unit effort (pieces/set) from \texttt{GFFOS.GF\_D\_OFFICIAL\_CATCH} with \texttt{get-cpue-spatial-ll.sql}}
\label{tab:sql-cpue-spatial-ll}
{\tabulinesep=1.6mm
\begin{tabu}{>{\raggedright\arraybackslash}m{2.8in}>{\raggedright\arraybackslash}m{3.2in}}
\toprule
Filters & Rationale \\
\midrule
Filtered for \texttt{LAT} between 47.8 and 55 and \texttt{LON} between -135 and -122 & To remove erroneous location records \\
Filtered for \texttt{YEAR(BEST\_DATE)} greater than or equal to 2008 & To extract only records since 2008, after implementation of the IFMP \\
Filtered for \texttt{YEAR(START\_DATE)} = \texttt{YEAR(END\_DATE)} and \texttt{END\_DATE} > \texttt{START\_DATE} & To remove records with erroneous dates \\
Filtered for \texttt{GEAR} IN \texttt{(HOOK AND LINE, LONGLINE, LONGLINE OR HOOK AND LINE)} & To extract only records in the hook and line fishery \\
\bottomrule
\end{tabu}}
\end{table}

\vspace{0mm}

### SURVEY CATCH DATA EXTRACTION

We extracted survey biomass index data using get-survey-index.sql. Bootstrapped biomass estimates, year and survey series identification code (SSID) are extracted, filtered for active records of the calculated biomass in the database (Table \@ref(tab:sql-survey-index)). Species and SSID codes are given as arguments to the gfplot function.

\begin{table}[htp]
\centering
\caption{Description of filters in SQL queries extracting the bootstrapped survey biomass index from \texttt{GFBio} with \texttt{get-survey-index.sql}}
\label{tab:sql-survey-index}
{\tabulinesep=1.6mm
\begin{tabu}{>{\raggedright\arraybackslash}m{2.8in}>{\raggedright\arraybackslash}m{3.2in}}
\toprule
Filters & Rationale \\
\midrule
Filter for \texttt{ACTIVE\_IND} = 1 & To extract only active (useable) bootstrapped index records \\
\bottomrule
\end{tabu}}
\end{table}

\vspace{0mm}

### BIOLOGICAL DATA EXTRACTION

We extracted biological data using get-survey-samples.sql and get-comm-samples.sql for research survey and commercial samples, respectively. Records of all biological samples are extracted by species, including available length, weight, age and maturity data. Standard length measurements differ by species: for example, rockfish and Pacific cod lengths are recorded as the length to where the tail forks, while Pacific halibut and arrowtooth flounder lengths are recorded as total length to the end of the tail. Because Spotted ratfish tails are often damaged, the standard for that species is the length from the snout to the end of the second dorsal fin; we therefore filtered Spotted ratfish records to include only lengths recorded this way, excluding the specimens for which total length was recorded instead.

Records include available metadata such as PFMA, fishery, gear type, SSID and survey identification code (SID; only available for research survey data), survey sampling types, and sampling protocol codes for maturity and ageing data. Data are filtered by TRIP_SUBTYPE_CODE to extract either survey (Table \@ref(tab:sql-surv-samp)) or commercial (Table \@ref(tab:sql-comm-samp)) samples.

Some survey or commercial catches are deemed unuseable for analysis. For example, when gear is lost, faulty or damaged, or when all or a portion of the catch is lost, the full catch data are not available and the partial data may not be representative of the full catch. Data from unuseable catches are excluded from this report.

In addition, samples are assigned one of three sample descriptions based on combinations of two codes relating to sampling protocols: SPECIES_CATEGORY_CODE (Table \@ref(tab:spp-cat)) and SAMPLE_SOURCE_CODE (Table \@ref(tab:samp-source)). Samples can be designated 'unsorted', in which data were collected for all specimens in the sample, or 'sorted', in which specimens were sorted or selected into 'keepers', which were sampled, and 'discards', which were not sampled:

  1. Specimens with a SPECIES_CATEGORY_CODE of 0 are of unknown species category and are not usable. Those with a SPECIES_CATEGORY_CODE of 1 (unsorted) and a SAMPLE_SOURCE_CODE of 0 (unknown) or 1 (unsorted), or with a SPECIES_CATEGORY_CODE of 5 (remains) or 6 (fish heads only) and a SAMPLE_SOURCE_CODE of 1 (unsorted) are classified as 'unsorted'.

  2. Specimens with a SAMPLE_SOURCE_CODE of 2 (keepers) and a SPECIES_CATEGORY_CODE of 1 (unsorted), 2 (sorted) or 3 (keepers), or with a SPECIES_CATEGORY_CODE of 3 (keepers) and a SAMPLE_SOURCE_CODE of 1 (unsorted), are classified as 'keepers'.

  3. Specimens with a SPECIES_CATEGORY_CODE of 4 (discards) and a SAMPLE_SOURCE_CODE of 1 (unsorted) or 3 (discards), or with a SAMPLE_SOURCE_CODE of 3 (discards) and a SPECIES_CATEGORY_CODE of 1 (unsorted), are classified as 'discards'.

In the synopsis report, we include only unsorted biological samples. Data are also filtered by SAMPLE_TYPE_CODE to extract only total or random samples and to exclude samples selected by specified criteria.
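
A sketch of the classification rules above, written with dplyr::case_when() over an invented data frame (this mirrors the logic in prose; it is not the SQL used in the queries):

```r
library(dplyr)

specimens <- tibble::tibble(
  species_category_code = c(1, 5, 1, 3, 4, 1, 0),
  sample_source_code    = c(0, 1, 2, 1, 3, 3, 1)
)

specimens <- mutate(specimens, sample_desc = case_when(
  species_category_code == 1 & sample_source_code %in% c(0, 1)    ~ "unsorted",
  species_category_code %in% c(5, 6) & sample_source_code == 1    ~ "unsorted",
  sample_source_code == 2 & species_category_code %in% c(1, 2, 3) ~ "keepers",
  species_category_code == 3 & sample_source_code == 1            ~ "keepers",
  species_category_code == 4 & sample_source_code %in% c(1, 3)    ~ "discards",
  sample_source_code == 3 & species_category_code == 1            ~ "discards",
  TRUE ~ NA_character_ # e.g., unknown species category (code 0)
))
```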

Age data extracted with the biological sample queries are filtered by AGEING_METHOD_CODE to select current ageing methods verified with the ageing lab at the Pacific Biological Station in order to remove experimental ageing methods that may also be recorded in the database (Table \@ref(tab:aging-method-table)).

\vspace{0mm}

Maturity codes are assigned at the time of sampling following a chosen convention. The various conventions have different scales and classifications appropriate for different species or species groups. We worked with the survey staff, data team and biologists for the various taxa to select codes at and above which an individual fish is considered 'mature' in order to assign a maturity status to each specimen based on a combination of maturity convention, maturity code and sex (Table \@ref(tab:maturity-table)).
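
The assignment step is effectively a join to a threshold lookup by maturity convention and sex, flagging a specimen mature when its maturity code meets the threshold. A sketch with invented thresholds (the real lookup ships with gfplot as extdata/maturity_assignment.csv and is read in a later chunk):

```r
library(dplyr)

thresholds <- tibble::tibble(
  maturity_convention_code = c(1, 1),
  sex = c(1, 2),      # 1 = male, 2 = female
  mature_at = c(3, 3) # maturity code at or above which a fish is mature
)
fish <- tibble::tibble(
  maturity_convention_code = c(1, 1, 1),
  sex = c(1, 2, 2),
  maturity_code = c(2, 3, 5)
)

left_join(fish, thresholds, by = c("maturity_convention_code", "sex")) %>%
  mutate(mature = maturity_code >= mature_at) # FALSE, TRUE, TRUE
```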

The ageing precision data are extracted with get-age-precision.sql. Data are filtered to bring in only records for which a secondary (precision) reading was performed by a different technician in addition to the primary reading (Table \@ref(tab:sql-age-precision)).

\begin{table}[htpb]
\centering
\caption{Description of filters in SQL queries extracting research survey sample data from GFBio with \texttt{get-survey-samples.sql}.}
\label{tab:sql-surv-samp}
{\tabulinesep=1.6mm
\begin{tabu}{>{\raggedright\arraybackslash}m{2.8in}>{\raggedright\arraybackslash}m{3.2in}}
\toprule
Filters & Rationale \\
\midrule
Filtered for \texttt{TRIP\_SUBTYPE\_CODE} \texttt{2, 3} (research trips) & To extract only research data \\
Filtered for \texttt{SAMPLE\_TYPE\_CODE} \texttt{1, 2, 6, 7, 8} (random or total) & To extract only those records of sample type 'random' or 'total' \\
Filtered for \texttt{SPECIES\_CATEGORY\_CODE} \texttt{NULL, 0, 1, 3, 4, 5, 6, 7} & To remove samples sorted on unknown criteria \\
Filtered for \texttt{SAMPLE\_SOURCE\_CODE} \texttt{NULL, 1, 2, 3} & To extract both sorted and unsorted samples for later filtration for desired analysis (removes stomach contents samples) \\
\bottomrule
\end{tabu}}
\end{table}

\clearpage

\begin{table}[htpb]
\centering
\caption{Description of filters in SQL queries extracting commercial sample data from GFBio with \texttt{get-comm-samples.sql}.}
\label{tab:sql-comm-samp}
{\tabulinesep=1.6mm
\begin{tabu}{>{\raggedright\arraybackslash}m{2.8in}>{\raggedright\arraybackslash}m{3.2in}}
\toprule
Filters & Rationale \\
\midrule
Filtered out \texttt{TRIP\_SUBTYPE\_CODE} \texttt{2, 3} (research trips) & To extract only commercial data \\
Filtered for \texttt{SAMPLE\_TYPE\_CODE} \texttt{1, 2, 6, 7, 8} (random or total) & To extract only those records of sample type 'random' or 'total' \\
Filtered for \texttt{SPECIES\_CATEGORY\_CODE} \texttt{NULL, 0, 1, 3, 4, 5, 6, 7} & To remove samples sorted on unknown criteria \\
Filtered for \texttt{SAMPLE\_SOURCE\_CODE} \texttt{NULL, 1, 2, 3} & To extract both sorted and unsorted samples for later filtration for desired analysis (removes stomach contents samples) \\
\bottomrule
\end{tabu}}
\end{table}

```{r spp-cat}
spp_cat <- readr::read_csv(here::here("report/report-rmd/tables/spp-category.csv"))
spp_cat$`Species Category Description` <-
  gfplot:::firstup(tolower(spp_cat$`Species Category Description`))
spp_cat$`Species Category Description` <-
  gsub("-", "--", spp_cat$`Species Category Description`)
spp_cat$`Species Category Description` <-
  gsub("unk\\.", "unknown", spp_cat$`Species Category Description`)
csasdown::csas_table(spp_cat, caption = "Species category codes lookup table, which
  describes sampling protocols at the catch level.")
```

```{r samp-source}
samp_source <- readr::read_csv(here::here("report/report-rmd/tables/sample-source.csv"))
samp_source$`Sample Source Description` <-
  gfplot:::firstup(tolower(samp_source$`Sample Source Description`))
csasdown::csas_table(samp_source, caption = "Sample source codes lookup table, which
  describes sampling protocols at the sample level.")
```
```{r age-methods-prep}
library(readr)
library(dplyr) # for filter(), select(), and arrange() below
f <- system.file("extdata", "ageing_methods.csv", package = "gfplot")
age_methods <- readr::read_csv(f, col_types = cols(
  species_common_name = col_character(),
  species_ageing_group = col_character(),
  species_code = col_character(),
  species_science_name = col_character(),
  ageing_method_codes = col_character()
))
age_methods <- filter(age_methods, !is.na(species_code))
# Pad two-digit species codes to three digits with a leading zero:
age_methods$species_code[nchar(age_methods$species_code) == 2L] <-
  paste0("0", age_methods$species_code[nchar(age_methods$species_code) == 2L])

names(age_methods) <- tolower(names(age_methods))
age_methods$species_common_name <- tolower(age_methods$species_common_name)
# age_methods <- left_join(age_methods, meta, by = "species_common_name")
age_methods$species_common_name <- gfsynopsis:::first_cap(age_methods$species_common_name)
# age_methods <- filter(age_methods, type == "A")
age_methods$species_common_name <- gsub("Rougheye/blackspotted Rockfish Complex",
  "Rougheye/Blackspotted", age_methods$species_common_name)
age_methods$species_science_name <- gsub(" complex", "", age_methods$species_science_name)
age_methods$species_science_name <- gsub("sebastes aleutianus", "s. aleutianus",
  age_methods$species_science_name)
age_methods <- filter(age_methods, !is.na(species_ageing_group))
age_methods <- select(age_methods, species_common_name, species_science_name,
  species_code, ageing_method_codes)
age_methods <- arrange(age_methods, species_code)
age_methods$species_science_name <- paste0("*",
  gfplot:::firstup(age_methods$species_science_name), "*")
age_methods <- filter(age_methods, ageing_method_codes != "na")
age_methods <- filter(age_methods, !is.na(species_code))
names(age_methods) <- c("Common name", "Scientific name", "Species code", "Ageing codes")
age_methods$`Common name` <- gsub("C-o Sole", "C-O Sole",
  age_methods$`Common name`)
```

\clearpage

```{r aging-method-table}
csasdown::csas_table(age_methods, caption = "Ageing method codes from GFBio considered valid throughout the synopsis document for groundfish species in British Columbia. The acceptable ageing method codes for each species were chosen with the support of the PBS Sclerochronology Lab. 1 = 'Otolith Surface Only', 3 = 'Otolith Broken and Burnt', 4 = 'Otolith Burnt and Thin Sectioned', 6 = 'Dorsal Fin XS', 7 = 'Pectoral Fin', 11 = 'Dorsal Spine', 12 = 'Vertebrae', 16 = 'Otolith Surface and Broken and Burnt', 17 = 'Otolith Broken and Baked (Break and Bake)'.")
```

\clearpage

```{r maturity-table}
library(dplyr) # for rename(), if_else(), and filter() below
f <- system.file("extdata", "maturity_assignment.csv", package = "gfplot")
mat <- read.csv(f, stringsAsFactors = FALSE, strip.white = TRUE)
names(mat) <- tolower(names(mat))
names(mat) <- gsub("_", " ", names(mat))
names(mat) <- gfplot:::firstup(names(mat))
mat$`Maturity convention maxvalue` <- NULL
mat <- rename(mat, `Maturity convention description` = `Maturity convention desc`,
  `Mat. conv. code` = `Maturity convention code`)
mat$Sex <- if_else(mat$Sex == 1, "M", "F")

mat <- filter(mat, !`Maturity convention description` %in% "HAKE (AMR simplified)")
mat <- filter(mat, !`Maturity convention description` %in% "HAKE (1977+)")
mat <- filter(mat, !`Maturity convention description` %in% "HAKE (U.S.)")
mat <- filter(mat, !grepl("SABLEFISH", `Maturity convention description`))

csasdown::csas_table(mat, caption = "Maturity convention codes ('Mat. conv. code'), maturity convention descriptions, sex, and the maturity convention value at which a fish is deemed to be mature for the purposes of the synopsis report. Note that fish may be considered mature at other maturity convention values in particular stock assessments where other values are chosen for specific reasons.")
```

\begin{table}[htp]
\centering
\caption{Description of filters in SQL queries extracting all age records with a precision test reading to determine ageing precision from GFBio with \texttt{get-age-precision.sql}}
\label{tab:sql-age-precision}
{\tabulinesep=1.6mm
\begin{tabu}{>{\raggedright\arraybackslash}m{2.8in}>{\raggedright\arraybackslash}m{3.2in}}
\toprule
Filters & Rationale \\
\midrule
Filter for \texttt{AGE\_READING\_TYPE\_CODE} \texttt{2, 3} & To extract primary and precision test readings \\
\bottomrule
\end{tabu}}
\end{table}

\clearpage

```{r}
## If wanting .docx tables to work:
# pdf <- knitr:::is_latex_output()
# read.csv(here::here("report/report-rmd/tables/sql-comm-samp.csv"),
#   stringsAsFactors = FALSE, strip.white = TRUE) %>%
#   csas_table(format = if (pdf) "latex" else "pandoc", escape = FALSE) %>%
#   kableExtra::column_spec(1, width = "2.8in") %>%
#   kableExtra::column_spec(2, width = "3.2in")
```

## DATA ACCESSIBILITY

Data from the Bottom Trawl Synoptic Surveys are available through the Open Government Data Portal. Hook and line survey data are currently being prepared for upload to the Open Data Portal. Commercial data will be uploaded in a rolled-up format in compliance with the Federal Privacy Act.

Requests for data held by DFO Pacific Region can be made through Pacific Fisheries Catch Statistics.


