duplicate_columns: Duplicate columns test

View source: R/unibern.R

duplicate_columns {dataresqc}    R Documentation

Duplicate columns test

Description

Looks for data that have been digitized twice by mistake. For sub-daily data, this is done by looking for series of zero differences between adjacent observation times; for daily data, by looking for series of zero differences between the same days of adjacent months.
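The sub-daily check can be understood as a run-length test on zero differences. The sketch below is illustrative only and is not the package's actual code; the helper name find_zero_runs and its arguments are hypothetical.

# Minimal sketch: flag runs of zero differences between adjacent
# observations that span at least `ndays` distinct days.
find_zero_runs <- function(values, days, ndays = 5) {
  is_zero <- diff(values) == 0          # TRUE where adjacent observations are identical
  r <- rle(is_zero)                     # run-length encoding of the zero pattern
  ends <- cumsum(r$lengths)
  starts <- ends - r$lengths + 1
  flagged <- integer(0)
  for (i in which(r$values)) {          # loop over runs of zero differences
    idx <- starts[i]:(ends[i] + 1)      # observations covered by this run
    if (length(unique(days[idx])) >= ndays) {
      flagged <- c(flagged, idx)
    }
  }
  flagged                               # indices of suspect observations
}

For example, find_zero_runs(c(5, 5, 5, 5, 5, 6), rep(1:3, each = 2), ndays = 3) returns the indices 1 to 5, because the first five observations are identical and cover three distinct days.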

Usage

duplicate_columns(Data, meta = NULL, outpath, ndays = 5)

Arguments

Data

A character string giving the path of the input file, or a matrix with 5 (7) columns for daily (sub-daily) data: variable code, year, month, day, (hour), (minute), value.

meta

A character vector with 6 elements: station ID, latitude, longitude, altitude, variable code, units. If Data is a path, meta is ignored.

outpath

Character string giving the path for the QC results.

ndays

Number of consecutive days with zero difference required to flag the data. The default is 5.

Details

The input file must follow the Copernicus Station Exchange Format (SEF). This function works with any numerical variable.

Zeroes are automatically excluded in bounded variables such as precipitation.
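For illustration, excluding zeroes for a bounded variable such as precipitation amounts to dropping zero observations before the check is applied. The snippet below reuses the hypothetical helper sketched above; the object names are made up.

# Illustrative only: drop zero values before the duplicate check
# for a bounded variable such as precipitation.
nonzero <- values != 0
find_zero_runs(values[nonzero], days[nonzero], ndays = 5)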

Author(s)

Yuri Brugnara

Examples

duplicate_columns(Rosario$Tn, Meta$Tn, outpath = tempdir(), ndays = 3)
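## A further hedged sketch showing in-memory input built with the documented
## column order (variable code, year, month, day, value) and a 6-element
## metadata vector (station ID, latitude, longitude, altitude, variable code,
## units). Whether a data frame is accepted in place of a matrix is an
## assumption, and all values below are made up.
daily <- data.frame(var   = "ta",
                    year  = 1901,
                    month = 1,
                    day   = 1:10,
                    value = c(2.1, 2.1, 2.1, 2.1, 2.1, 3.0, 2.8, 2.5, 2.4, 2.0))
meta <- c("ST001", "46.95", "7.44", "550", "ta", "C")
duplicate_columns(daily, meta, outpath = tempdir(), ndays = 5)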

