```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
options(tidyverse.quiet = TRUE)
```
Loggers and field equipment produce many different dataset formats.
The `readr::read_delim` function [@readr2024] works well in most cases, provided the data are stored in a delimited text file.
Here we provide examples of how to use `read_delim` to prepare your raw data files for the Fluxible R package.
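As a minimal sketch (the file name `my_logger_file.csv` is hypothetical), a typical call looks like this:

```r
library(readr)

# Hypothetical minimal example: a comma-delimited file with a header row.
# Adjust delim (and possibly skip) to match what your logger exports.
raw_conc <- read_delim(
  "my_logger_file.csv",
  delim = ","
)
```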
For users of Li-COR instruments, the licoread R package, developed in collaboration with Li-COR, is designed to read 82z and 81x files and import them as fluxible-friendly objects.
The first function to use when processing ecosystem gas flux data with Fluxible is `flux_match`, which requires two inputs: `raw_conc` and `field_record`.
The columns in both files do not require specific names, and the names you choose will be kept throughout the entire workflow. We advise, however, that the names contain no spaces or special characters.
## raw_conc

The input `raw_conc` is the file with the gas concentration measured over time, typically the file exported by the logger or instrument. It needs to fulfill the following criteria:

- a column with the measured gas concentration
- a datetime column (in yyyy-mm-dd hh:mm:ss format) corresponding to each data point of gas concentration
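As an illustration, a minimal `raw_conc` object could look like this (made-up values):

```r
library(tibble)

# Made-up example of a minimal raw_conc input:
# a datetime column and a gas concentration column.
raw_conc <- tibble(
  datetime = as.POSIXct(c(
    "2024-07-01 10:00:00",
    "2024-07-01 10:00:01",
    "2024-07-01 10:00:02"
  )),
  co2_conc = c(421.3, 421.6, 421.9)
)
```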
## field_record

The input `field_record` is the metadata file that records when which sample or plot was measured. It can also provide other metadata, such as campaign, site, type of measurement, etc.
This file should contain:

- a column with the start of each measurement (datetime in yyyy-mm-dd hh:mm:ss format)
- a column with the end of each measurement (datetime in yyyy-mm-dd hh:mm:ss format)
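For illustration, a minimal `field_record` could look like this (made-up plot IDs and times; the column names are yours to choose):

```r
library(tibble)

# Made-up example of a field_record: which plot was measured when.
field_record <- tibble(
  plot_id = c("A1", "A2"),
  start = as.POSIXct(c("2024-07-01 10:00:00", "2024-07-01 10:05:00")),
  end = as.POSIXct(c("2024-07-01 10:03:00", "2024-07-01 10:08:00"))
)
```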
## flux_match

The `flux_match` function simply attributes a unique `flux_id` to each measurement and slices out recordings in between measurements. Depending on your setup, this step might not be necessary.
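With the two example objects above, a `flux_match` call could look roughly like the sketch below; the argument names are an assumption on our part, so check `?flux_match` for the exact interface of your fluxible version:

```r
library(fluxible)

# Sketch only: the column-mapping argument names are assumptions,
# see ?flux_match for the actual interface.
conc <- flux_match(
  raw_conc,
  field_record,
  datetime = datetime,
  start_col = start,
  end_col = end
)
```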
## flux_fitting

The `flux_fitting` function is the step after `flux_match`, and its input should fulfill the following points:

- a datetime column (in yyyy-mm-dd hh:mm:ss format) corresponding to each gas concentration data point
- a column with the start of each measurement (yyyy-mm-dd hh:mm:ss)
- a column with the end of each measurement (yyyy-mm-dd hh:mm:ss)

Fluxible treats the entire dataset homogeneously in terms of the model fitted to the data and of quality control. This is because every step in this process risks adding bias to the final data (for example, a linear fit tends to underestimate fluxes while an exponential one tends to overestimate them). By treating the data homogeneously, the biases are consistent and therefore affect further analysis less. Measurements done under similar conditions and with the same equipment should ideally be processed together.
Note that the "flux after flux" approach, treating each measurement individually in terms of fit, cut and quality control, is also possible (but this is not what Fluxible is optimised for!). It would require looping the `flux_fitting` and `flux_quality` functions (optionally `flux_plot` too, for visualization) over each flux ID, with a prompt asking for the arguments, as sketched below.
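Here is a rough sketch of such a loop; the column name `f_fluxid` and the `fit_type` argument are assumptions, so adapt them to the actual interfaces (`?flux_fitting`, `?flux_quality`):

```r
library(dplyr)
library(purrr)
library(fluxible)

# Rough sketch of a "flux after flux" loop (not what Fluxible is
# optimised for). The f_fluxid column and fit_type argument are
# assumptions; check ?flux_fitting and ?flux_quality.
fluxes <- conc |>
  group_split(f_fluxid) |> # one data frame per flux ID
  map(\(one_flux) {
    fit <- readline("Model to fit for this flux (e.g. lin or exp)? ")
    one_flux |>
      flux_fitting(fit_type = fit) |>
      flux_quality()
  }) |>
  list_rbind() # back to a single data frame
```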
In this example we will import the file `26124054001.#00`, a text file extracted from a Squirrel Data Logger 2010 Series (Grant Instruments) through the SquirrelView software.
The first thing to do when importing a file with `read_delim` is to open the file in a text editor to look at its structure.
[Figure: the raw file opened in a text editor]
We will read the file with `read_delim`, then use `rename` and `mutate` from the dplyr package [@dplyr2023] to transform the columns into what we want, and `dmy_hms` from the lubridate package [@lubridate2011] to get our datetime column in the right format:
```r
library(tidyverse) # readr, dplyr and lubridate are part of tidyverse

raw_conc <- read_delim(
  "ex_data/26124054001.#00",
  delim = ",", # our file is comma separated
  skip = 25 # the first 25 rows are logger infos that we do not want to keep
)
```
Let's have a look at the `raw_conc` structure:
```r
str(raw_conc, width = 70, strict.width = "cut", give.attr = FALSE)
```
Not too bad... but we are not quite there yet:

- Several columns are not needed: `Type` (nothing to do with the type of measurement, something from the logger), `CO2 (V)` and `H2O (V)` (the voltage inputs to the logger, not needed), and `H2O_calc (ppt)` (not calibrated for this campaign, so better removed to avoid confusion).
- The `Date` and `Time` columns should be united into one and transformed into yyyy-mm-dd hh:mm:ss format.
formatraw_conc <- raw_conc |> rename( co2_conc = "CO2_calc (ppm)" ) |> mutate( datetime = paste0(Date, Time), # we paste date and time together datetime = dmy_hms(datetime) # datetime instead of character ) |> select(datetime, co2_conc)
Et voilà:
```r
str(raw_conc, width = 70, strict.width = "cut", give.attr = FALSE)
```
Quite often a field season will result in several files. In this example we will read all the files in "ex_data/" that contain "CO2" in their names.
```r
library(tidyverse)

raw_conc <- list.files( # list the files
  "ex_data", # at location "ex_data"
  full.names = TRUE,
  pattern = "CO2" # that contain "CO2" in their name
) |>
  map_dfr(
    read_csv, # we map read_csv on all the files
    na = c("#N/A", "Over") # "#N/A" and "Over" should be treated as NA
  ) |>
  rename(
    conc = "CO2 (ppm)"
  ) |>
  mutate(
    datetime = dmy_hms(`Date/Time`)
  ) |>
  select(datetime, conc)
```
Fluxible is designed to process data that were measured continuously (in a single file or several) together with a `field_record` that records what was measured when. Another strategy when measuring gas fluxes in the field is to create a new file for each measurement, with the file name as the flux ID. The approach is similar to reading multiple files, except that we add a column with the file name, which lets us bypass `flux_match`.
```r
library(tidyverse)

raw_conc <- list.files( # listing all the files
  "ex_data/field_campaign", # at location "ex_data/field_campaign"
  full.names = TRUE
) |>
  map_dfr(
    # we map read_tsv on all the files
    # read_tsv is the version of read_delim for tab separated value files
    read_tsv,
    skip = 3,
    id = "filename" # creates a column with the filename, our flux ID
  ) |>
  rename( # a bit of renaming to make the columns more practical
    co2_conc = "CO2 (umol/mol)",
    h2o_conc = "H2O (mmol/mol)",
    air_temp = "Temperature (C)",
    pressure = "Pressure (kPa)"
  ) |>
  mutate(
    datetime = paste(Date, Time),
    # we get rid of the milliseconds
    datetime = as.POSIXct(datetime, format = "%Y-%m-%d %H:%M:%OS"),
    pressure = pressure / 101.325, # conversion from kPa to atm
    filename = basename(filename) # removing folder names
  ) |>
  select(datetime, co2_conc, h2o_conc, air_temp, pressure, filename)
```
Let's check the `raw_conc` structure:
```r
str(raw_conc, width = 70, strict.width = "cut", give.attr = FALSE)
```
What happens when you extract a logger file as csv on a computer whose locale uses a comma as the decimal mark (quite standard in Europe)? Well, you get a comma separated values (csv) file with decimals separated by... commas. Ideally the file would have been extracted as a European csv, which uses commas for decimals and semicolons as column separators. But here we are.
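(For reference, if you can re-extract the data as a proper European csv, readr reads it directly with a semicolon delimiter and the right locale; the file name below is hypothetical:)

```r
library(readr)

# Hypothetical: a European csv export of the same data
# (semicolon-separated, comma as decimal mark).
raw_conc <- read_delim(
  "ex_data/011023001_eu.csv", # hypothetical file name
  delim = ";",
  locale = locale(decimal_mark = ",")
)
```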
[Figure: the comma-decimal raw file opened in a text editor]

Let's try the usual way first:
```r
library(tidyverse)

raw_conc <- read_csv( # read_csv is the same as read_delim(delim = ",")
  "ex_data/011023001.#01",
  col_types = "Tcdddddd",
  na = "#N/A" # we tell read_csv what NA looks like in that file
)
```
```r
str(raw_conc, width = 70, strict.width = "cut", give.attr = FALSE)
```
It got the column names right, but then of course interpreted every comma as a separator and made a mess. Let's see if we can skip the header and then assemble the columns holding the left and right sides of the decimal point:
```r
raw_conc <- read_csv(
  "ex_data/011023001.#01",
  skip = 1, # this time we skip the row with the column names
  col_names = FALSE, # we tell read_csv that column names are not provided
  na = "#N/A" # we tell read_csv what NA looks like in that file
)
```
```r
str(raw_conc, width = 70, strict.width = "cut", give.attr = FALSE)
```
The problem now is that CO~2~ concentration was measured every second (with a comma!), while the other variables were measured every 10 seconds. That means every 10th row has 14 comma separated elements, while the others have only 10. Uhhhhhhhhh
At this point, you might want to get the field computer out again and re-extract your raw file with a European csv output, or anything that is not comma separated, or set the decimal mark to a... point. But for the sake of it, let's pretend that is not an option and solve the issue in R:
```r
# we read each row of our file as an element of a list
lines <- readLines("ex_data/011023001.#01")
lines <- lines[-1] # removing the first element with the column names

# we first deal with the elements holding the environmental data
# that were measured every 10 seconds
linesenv <- lines[seq(1, length(lines), 10)]
env_df <- read.csv(
  textConnection(linesenv), # we read the list into a csv
  header = FALSE, # there is no header
  colClasses = rep("character", 14)
  # specifying that those columns are character is important:
  # if read as integer, 06 becomes 6, and when putting columns together,
  # 400.06 would become 400.6, which is wrong
)
env_df <- env_df |>
  mutate(
    datetime = dmy_hms(V1),
    temp_air = paste(
      V7, # V7 contains the left side of the decimal point
      V8, # V8 the right side
      sep = "." # this time we put it in American format
    ),
    temp_air = as.double(temp_air), # now we can make it a double
    temp_soil = as.double(paste(V9, V10, sep = ".")),
    co2_conc = as.double(paste(V11, V12, sep = ".")),
    PAR = as.double(paste(V13, V14, sep = "."))
  ) |>
  select(datetime, temp_air, temp_soil, co2_conc, PAR)

# now we do the same with the other elements of the list
lines_other <- lines[-seq(1, length(lines), 10)]
other_df <- read.csv(
  textConnection(lines_other),
  header = FALSE,
  colClasses = rep("character", 10)
)
other_df <- other_df |>
  mutate(
    datetime = dmy_hms(V1),
    co2_conc = as.double(paste(V8, V9, sep = "."))
  ) |>
  select(datetime, co2_conc)

# and finally we bind the two dataframes back together
conc_df <- bind_rows(env_df, other_df) |>
  arrange(datetime) # I like my dataframes in chronological order
```
Et voilà:
```r
str(conc_df, width = 70, strict.width = "cut", give.attr = FALSE)
```
That was a strange mix of tidyverse and base R, and I would definitely plot the data to check that they make sense (numbers around 420 are most likely CO~2~ concentrations, those between 5 and 20 probably temperatures, and soil temperature should be lower than air temperature). But it worked...
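A quick plot, for instance with ggplot2, is an easy way to run that sanity check:

```r
library(ggplot2)

# Quick visual sanity check of the reconstructed CO2 concentrations.
conc_df |>
  ggplot(aes(datetime, co2_conc)) +
  geom_line()
```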