```r
knitr::opts_chunk$set(echo = TRUE)
library(NNS)
library(data.table)
data.table::setDTthreads(2L)
options(mc.cores = 1)
Sys.setenv("OMP_THREAD_LIMIT" = 2)
```
```r
library(NNS)
library(data.table)
require(knitr)
require(rgl)
```
The limitations of linear correlation are well known. Often correlation is used when dependence is the intended measure of the relationship between variables. NNS dependence `NNS.dep` is a signal-to-noise measure robust to nonlinear signals.

Below are some examples comparing NNS correlation `NNS.cor` and `NNS.dep` with the standard Pearson's correlation coefficient `cor`.
Note that all observations occupy the co-partial moment quadrants.
```r
x = seq(0, 3, .01) ; y = 2 * x

NNS.part(x, y, Voronoi = TRUE, order = 3)

cor(x, y)
NNS.dep(x, y)
```
Again, note that all observations occupy the co-partial moment quadrants.
```r
x = seq(0, 3, .01) ; y = x ^ 10

NNS.part(x, y, Voronoi = TRUE, order = 3)

cor(x, y)
NNS.dep(x, y)
```
Even difficult inflection points, which span both the co- and divergent partial moment quadrants, are properly compensated for in `NNS.dep`.
```r
x = seq(0, 12*pi, pi/100) ; y = sin(x)

NNS.part(x, y, Voronoi = TRUE, order = 3, obs.req = 0)

cor(x, y)
NNS.dep(x, y)
```
Asymmetrical analysis is critical for further determining a causal path between variables: such a path should be identifiable, i.e., asymmetrical in causes and effects.
The previous cyclic example visually highlights the asymmetry of dependence between the variables, which can be confirmed using `NNS.dep(..., asym = TRUE)`.
```r
cor(x, y)
NNS.dep(x, y, asym = TRUE)

cor(y, x)
NNS.dep(y, x, asym = TRUE)
```
Note that all observations occupy only the co- or divergent partial moment quadrants of a given subquadrant.
```r
set.seed(123)
df <- data.frame(x = runif(10000, -1, 1), y = runif(10000, -1, 1))
df <- subset(df, (x ^ 2 + y ^ 2 <= 1 & x ^ 2 + y ^ 2 >= 0.95))

NNS.part(df$x, df$y, Voronoi = TRUE, order = 3, obs.req = 0)

NNS.dep(df$x, df$y)
```
## NNS.dep()
p-values and confidence intervals can be obtained by sampling random permutations of $y \rightarrow y_p$ and running `NNS.dep(x, y_p)` to compare against a null hypothesis of 0 correlation, i.e., independence between $(x, y)$.

Simply set `NNS.dep(..., p.value = TRUE, print.map = TRUE)` to run 100 permutations and plot the results.
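The permutation procedure just described can also be sketched by hand. This is a minimal sketch, assuming `NNS.dep` returns a list containing a `Dependence` element (the element name is an assumption, not shown in the examples above):

```r
library(NNS)

set.seed(123)
x <- seq(-5, 5, .1)
y <- x^2 + rnorm(length(x))

# Observed dependence between x and y
# (assumes NNS.dep returns a list with a $Dependence element)
obs <- NNS.dep(x, y)$Dependence

# Null distribution: dependence of x with random permutations of y
null_dist <- replicate(100, NNS.dep(x, sample(y))$Dependence)

# One-sided p-value: share of permutations at least as dependent as observed
p_value <- mean(null_dist >= obs)
p_value
```

The built-in `p.value = TRUE` argument shown below automates exactly this style of comparison.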
```r
## p-values for [NNS.dep]
set.seed(123)
x <- seq(-5, 5, .1); y <- x^2 + rnorm(length(x))

NNS.part(x, y, Voronoi = TRUE, order = 3)

NNS.dep(x, y, p.value = TRUE, print.map = TRUE)
```
## NNS.copula()
These partial moment insights permit us to extend the analysis to multivariate instances and deliver a dependence measure $(D)$ such that $D \in [0,1]$. This level of analysis is simply impossible with Pearson's correlation or rank-based correlation methods, which are restricted to bivariate cases.
```r
set.seed(123)
x <- rnorm(1000); y <- rnorm(1000); z <- rnorm(1000)

NNS.copula(cbind(x, y, z), plot = TRUE, independence.overlay = TRUE)
```
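For contrast with the independent case above, a brief sketch comparing $D$ on variables that share a common driver — assuming `NNS.copula` returns the scalar dependence value when called without plotting arguments — would look like:

```r
library(NNS)

set.seed(123)

# Independent variables: D should sit near the lower end of [0, 1]
x <- rnorm(1000); y <- rnorm(1000); z <- rnorm(1000)
D_indep <- NNS.copula(cbind(x, y, z))

# Variables sharing the common driver x: D should be noticeably larger
y2 <- x + rnorm(1000, sd = 0.1)
z2 <- x + rnorm(1000, sd = 0.1)
D_dep <- NNS.copula(cbind(x, y2, z2))

c(independent = D_indep, dependent = D_dep)
```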
If the user is so motivated, detailed arguments and proofs are provided within the following:

- *Deriving Nonlinear Correlation Coefficients from Partial Moments*
- *Beyond Correlation: Using the Elements of Variance for Conditional Means and Probabilities*
```r
Sys.setenv("OMP_THREAD_LIMIT" = "")
```