View source: R/read_acttrust.R
read_acttrust (R Documentation)
read_acttrust() allows you to read, tidy, and validate an ActTrust file in a consistent and easy manner. You can see the output data structure in ?acttrust.
ActTrust is a trademark of Condor Instruments Ltda.
Usage

read_acttrust(file = file.choose(), tz = "UTC", regularize = TRUE)
Arguments

file
A string with the file path for the ActTrust data. If not assigned, a dialog window will open, allowing the user to browse and select a file.

tz
A string that specifies which time zone to parse the dates/times with. The string must be a time zone that is recognized by the user's OS. For more information, see ?timezones (default: "UTC").

regularize
(optional) A logical value indicating whether read_acttrust() should find and correct irregular epochs/intervals (default: TRUE).
Details

read_acttrust() requires the readr package. If you don't already have it installed, you can install it with:

install.packages("readr")
Regularize parameter

It's common to find uneven epochs/intervals in ActTrust data files. This occurs because the actigraph's internal clock can drift by a few seconds while recording, which can become an issue for some computations. With regularize = TRUE, read_acttrust() finds and corrects those irregularities.
It's important to note that this process will only work if a clear epoch/periodicity can be found in the data. Regularization is done by aggregating the values within each epoch: numeric variables are averaged, while single integer and other types of variables are assigned their most frequent value (mode).
Any gap found in the time series will be assigned as NA, with a state value of 9.
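To make the aggregation rule concrete, here is a minimal conceptual sketch, assuming a toy data frame and a hypothetical stat_mode() helper; it only illustrates the averaging/mode idea described above and is not read_acttrust()'s internal implementation.

# Conceptual sketch only: group slightly uneven timestamps into regular
# one-minute epochs, average numeric variables, and take the mode of
# integer variables. The toy data and stat_mode() helper are hypothetical.
library(dplyr)
library(lubridate)

stat_mode <- function(x) {
  ux <- unique(x)
  ux[which.max(tabulate(match(x, ux)))]
}

irregular <- tibble::tibble(
  timestamp = as.POSIXct("2021-04-24 00:00:00", tz = "UTC") +
    c(0, 59, 121, 180, 242),             # uneven ~60 s spacing
  pim       = c(100, 110, 90, 95, 105),  # numeric activity measure
  state     = c(0L, 0L, 4L, 0L, 0L)      # integer state variable
)

regular <- irregular |>
  mutate(timestamp = floor_date(timestamp, unit = "minute")) |>
  group_by(timestamp) |>
  summarise(pim = mean(pim), state = stat_mode(state), .groups = "drop")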
read_acttrust() will transform any offwrist data (data with state == 4) into missing data (NA). These observations will still be classified as offwrist in the state variable.
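As an illustration, the hedged sketch below counts off-wrist epochs in the output and checks that their measurement values are missing; the pim column name is an assumption based on the ?acttrust data structure.

# Hedged usage sketch: off-wrist epochs keep state == 4 while their
# measurement columns (e.g., pim, assumed here) are set to NA.
library(dplyr)

data <- read_acttrust(raw_data("acttrust.txt"))

data |>
  tibble::as_tibble() |>
  filter(state == 4) |>
  summarise(
    n_offwrist      = n(),
    all_pim_missing = all(is.na(pim))
  )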
The process of tidying a dataset is understood here as transforming it into input data, as described in van der Loo and de Jonge (2018). It is very similar to the process of tidying data described in the workflow proposed by Wickham and Grolemund (n.d.).
The process of validating a dataset is understood here as detecting invalid data by checking whether the data satisfy certain assumptions from domain knowledge, and then removing or, if possible, fixing them. This can be considered part of the process of transforming data described in the workflow proposed by Wickham and Grolemund (n.d.).
To learn more about the concept of tidy data, see Wickham (2014) and Wickham and Grolemund (n.d.). You can find more about data validation and error localization in van der Loo and de Jonge (2018).
Value

A tsibble object. The data structure can be found in ?acttrust.
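As a quick check of the return structure, the sketch below uses tsibble's own accessors; it assumes the tsibble package is installed.

# Inspect the returned tsibble (assumes the tsibble package is available).
data <- read_acttrust(raw_data("acttrust.txt"))

tsibble::is_tsibble(data) # should be TRUE
tsibble::index(data)      # the time index variable
tsibble::interval(data)   # the (regularized) epoch of the series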
References

van der Loo, M., & de Jonge, E. (2018). Statistical data cleaning with applications in R. John Wiley & Sons. doi:10.1002/9781118897126.

Wickham, H. (2014). Tidy data. Journal of Statistical Software, 59(10), 1-23. doi:10.18637/jss.v059.i10.

Wickham, H., & Grolemund, G. (n.d.). R for data science. (n.p.). https://r4ds.had.co.nz
See also

Other utility functions: aggregate_index(), find_epoch(), raw_data(), write_acttrust()
Examples

read_acttrust(raw_data("acttrust.txt"))
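A few additional, hedged call variations; the time zone and the choice to skip regularization are illustrative, not recommendations from the package authors.

# Illustrative variations (assumptions: the bundled acttrust.txt file and
# an arbitrary time zone choice).
if (requireNamespace("readr", quietly = TRUE)) {
  # Parse timestamps in a specific time zone.
  read_acttrust(raw_data("acttrust.txt"), tz = "America/Sao_Paulo")

  # Keep the original, possibly irregular, intervals.
  read_acttrust(raw_data("acttrust.txt"), regularize = FALSE)
}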