ncc_correlate    R Documentation

View source: R/ncc_correlate.R

Description
The function creates a cross correlation time series of two input data
sets. The input data is cut into overlapping snippets and, optionally,
further smaller sub snippets for averaging the results per time snippet.
The data can be subjected to a series of optional preprocessing steps,
i.e. aggregation, deconvolution, filtering, cutting by standard deviation,
and sign-cutting. The cross correlation function is then calculated and
returned for a defined lag time window. The output of the function is
supposed to be used as input for the function ncc_process().

Usage
ncc_correlate(
start,
stop,
ID,
component,
dir,
window,
overlap = 0,
window_sub,
lag,
dt,
deconvolve = FALSE,
f,
pick = FALSE,
whiten = FALSE,
sd,
sign = FALSE,
cpu,
buffer = 0.05,
eseis = TRUE,
...
)
Arguments

start | Start time of the data to be analysed.
stop | Stop time of the data to be analysed.
ID | IDs of the two stations to be correlated.
component | Seismic components of the two stations to be correlated.
dir | Path to the directory containing the seismic files.
window | Length of the correlation time snippets, in seconds.
overlap | Fraction of overlap of the time snippets.
window_sub | Length of the sub snippets used for averaging, in seconds.
lag | Lag time window for which the cross correlation function is returned, in seconds.
dt | Sampling interval to which the data sets are aggregated, in seconds.
deconvolve | Option to deconvolve the input data sets.
f | Filter corner frequencies, in Hz.
pick |
whiten |
sd | Number of standard deviations by which the data sets are amplitude-cut.
sign | Option to sign-cut the data sets.
cpu | Fraction of CPUs to use for parallel processing.
buffer |
eseis |
... | Further arguments passed to the functions.
Details

The sampling interval (dt) must be defined. It is wise to choose it such
that the resulting sampling frequency (1/dt) is more than twice the
filter's upper corner frequency (f[2]). Aggregation is recommended to
improve computational efficiency, but it is mandatory if data sets of
different sampling intervals are to be analysed. In that case, it must be
possible to aggregate the data sets to the provided aggregation sampling
interval (dt), otherwise an error will arise. As an example, if the two
data sets have sampling intervals of 1/200 s and 1/500 s, the highest
possible aggregated sampling frequency is 100 Hz, i.e. dt = 1/100 s. See
aux_commondt() for further information.
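The reasoning behind this example can be sketched in a few lines of base R.
This is only an illustration, not the aux_commondt() implementation, and
the helper names (gcd, dt_common) are made up for this sketch; it assumes
both sampling frequencies are integer-valued Hz. The highest common
sampling frequency is the greatest common divisor of the two original
sampling frequencies.

gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)

dt_common <- function(dt_1, dt_2) {
  f_1 <- round(1 / dt_1)        # original sampling frequencies (Hz)
  f_2 <- round(1 / dt_2)
  f_common <- gcd(f_1, f_2)     # highest common sampling frequency (Hz)
  1 / f_common                  # corresponding aggregation interval (s)
}

dt_common(dt_1 = 1/200, dt_2 = 1/500)   # returns 0.01, i.e. dt = 1/100 s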
The function supports parallel processing. However, keep in mind that calculating the cross correlation functions for large data sets and large windows will consume serious amounts of memory. For example, a 24 h window of two seismic signals recorded at 200 Hz will easily use 15 GB of RAM. Combining this with parallel processing will multiply that memory demand. Therefore, it is better to start with modest ambitions and check how the computer's system statistics evolve with increasing window sizes and numbers of parallel operations.
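As a rough, assumption-based sketch of why memory grows so quickly (the
numbers below only estimate the raw trace size, not how eseis manages
memory internally):

window <- 24 * 3600     # window length (s)
f_s <- 200              # sampling frequency (Hz)
n <- window * f_s       # samples per trace
n * 8 / 1024^3          # raw size of one double precision trace, ca. 0.13 GB

Preprocessing copies, sub snippets and FFT-based correlation can inflate
the actual footprint by one to two orders of magnitude, in line with the
15 GB figure quoted above, and each parallel worker adds its own copy.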
Deconvolution is recommended if different station hardware and setups are used for the stations to be analysed (i.e., different sensors, loggers or gain factors).
To account for biases due to brief events in the signals, the data sets
can be truncated (cut) in their amplitude. This cutting can either be done
based on the data sets' standard deviations (or a multiple of those
standard deviations, see signal_cut() for further details), using the
argument sd, e.g. sd = 1 for one standard deviation. Alternatively, the
data can also be cut by their sign (i.e., positive and negative values
will be converted to 1 and -1, respectively).
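A minimal sketch of the two truncation options, assuming a plain numeric
signal vector x (this is not the signal_cut() implementation used
internally):

set.seed(1)
x <- rnorm(1000)       # example signal
k <- 1                 # number of standard deviations (sd = 1)

x_cut  <- pmin(pmax(x, -k * sd(x)), k * sd(x))   # amplitude cut at +/- k sd
x_sign <- sign(x)                                # sign cut: values become 1 or -1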
Value

List with correlogram matrix, time and lag vectors.
Author(s)

Michael Dietze

Examples
## Not run:
## calculate correlogram
cc <- ncc_correlate(start = "2017-04-09 00:30:00",
                    stop = "2017-04-09 01:30:00",
                    ID = c("RUEG1", "RUEG2"),
                    dt = 1/10,
                    component = c("Z", "Z"),
                    dir = paste0(system.file("extdata",
                                             package = "eseis"), "/"),
                    window = 600,
                    overlap = 0,
                    lag = 20,
                    deconvolve = TRUE,
                    sensor = "TC120s",
                    logger = "Cube3extBOB",
                    gain = 1,
                    f = c(0.05, 0.1),
                    sd = 1)
## plot output
plot_correlogram(cc)
## End(Not run)