```r
knitr::opts_chunk$set(echo = TRUE)
```

Why is it necessary to parse the variance with partial moments? The additional information generated from partial moments permits a level of analysis simply not possible with traditional summary statistics.

Below are some basic equivalences demonstrating partial moments' role as the elements of variance.
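Before the package functions, a minimal base-R sketch of the quantities involved may help. The `upm`/`lpm` helpers below are hypothetical stand-ins for `UPM(degree, target, x)` and `LPM(degree, target, x)`, under the assumption that degree-$n$ partial moments are the means of the positive/negative deviations from the target raised to the $n$-th power:

```r
# Hypothetical base-R stand-ins for UPM(degree, target, x) / LPM(degree, target, x),
# assuming partial moments are means of clipped deviations raised to `degree`
upm = function(degree, target, x) mean(pmax(x - target, 0) ^ degree)
lpm = function(degree, target, x) mean(pmax(target - x, 0) ^ degree)

set.seed(123); x = rnorm(100)

upm(1, 0, x) - lpm(1, 0, x)             # recovers mean(x)
upm(2, mean(x), x) + lpm(2, mean(x), x) # recovers the population variance
```

Since every deviation from the target is either positive or negative, the clipped pieces partition the data; the identities below follow directly.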

```r
library(NNS)

set.seed(123); x = rnorm(100); y = rnorm(100)

mean(x)

UPM(1, 0, x) - LPM(1, 0, x)
```

```r
var(x)

# Population variance:
UPM(2, mean(x), x) + LPM(2, mean(x), x)

# Sample variance (matches var(x)):
(UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))

# Variance is also the covariance of a variable with itself:
(Co.LPM(1, x, x, mean(x), mean(x)) + Co.UPM(1, x, x, mean(x), mean(x)) - D.LPM(1, 1, x, x, mean(x), mean(x)) - D.UPM(1, 1, x, x, mean(x), mean(x))) * (length(x) / (length(x) - 1))
```

```r
sd(x)

((UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))) ^ .5
```

The first 4 moments are returned with the function `NNS.moments`. For sample statistics, set `population = FALSE`.

```r
NNS.moments(x)

NNS.moments(x, population = FALSE)
```

`NNS.mode` offers support for discrete valued distributions as well as recognizing multiple modes.

```r
# Continuous
NNS.mode(x)

# Discrete and multiple modes
NNS.mode(c(1, 2, 2, 3, 3, 4, 4, 5), discrete = TRUE, multi = TRUE)
```

```r
cov(x, y)

(Co.LPM(1, x, y, mean(x), mean(y)) + Co.UPM(1, x, y, mean(x), mean(y)) - D.LPM(1, 1, x, y, mean(x), mean(y)) - D.UPM(1, 1, x, y, mean(x), mean(y))) * (length(x) / (length(x) - 1))
```

The covariance matrix $\Sigma$ is equal to the sum of the co-partial moments matrices less the divergent partial moments matrices.

$$\Sigma = CLPM + CUPM - DLPM - DUPM$$

```r
PM.matrix(LPM_degree = 1, UPM_degree = 1, target = 'mean', variable = cbind(x, y), pop_adj = TRUE)

# Standard covariance matrix:
cov(cbind(x, y))
```

```r
cor(x, y)

cov.xy = (Co.LPM(1, x, y, mean(x), mean(y)) + Co.UPM(1, x, y, mean(x), mean(y)) - D.LPM(1, 1, x, y, mean(x), mean(y)) - D.UPM(1, 1, x, y, mean(x), mean(y))) * (length(x) / (length(x) - 1))

sd.x = ((UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))) ^ .5
sd.y = ((UPM(2, mean(y), y) + LPM(2, mean(y), y)) * (length(y) / (length(y) - 1))) ^ .5

cov.xy / (sd.x * sd.y)
```

The degree-0 LPM is the empirical CDF, since $LPM(0, t, X) = P(X \le t)$:

```r
P = ecdf(x)
P(0); P(1)

LPM(0, 0, x); LPM(0, 1, x)

# Vectorized targets:
LPM(0, c(0, 1), x)

plot(ecdf(x))
points(sort(x), LPM(0, sort(x), x), col = "red")
legend("left", legend = c("ecdf", "LPM.CDF"), fill = c("black", "red"), border = NA, bty = "n")

# Joint CDF:
Co.LPM(0, x, y, 0, 0)

# Vectorized targets:
Co.LPM(0, x, y, c(0, 1), c(0, 1))

# Continuous CDF:
NNS.CDF(x, 1)

# CDF with target:
NNS.CDF(x, 1, target = mean(x))

# Survival function:
NNS.CDF(x, 1, type = "survival")
```

```r
NNS.PDF(x)
```

Partial moments are asymptotic area approximations of $f(x)$, akin to the familiar trapezoidal and Simpson's rules: the more observations, the better the approximation.

$$[UPM(1,0,f(x))-LPM(1,0,f(x))]\asymp\frac{[F(b)-F(a)]}{[b-a]}$$

$$[UPM(1,0,f(x))-LPM(1,0,f(x))] * [b-a] \asymp [F(b)-F(a)]$$

```r
x = seq(0, 1, .001); y = x ^ 2

(UPM(1, 0, y) - LPM(1, 0, y)) * (1 - 0)
```

$$0.3333 * [1-0] = \int_{0}^{1} x^2 dx$$

For the total area, not just the definite integral, simply sum the partial moments and multiply by $[b - a]$:

$$[UPM(1,0,f(x))+LPM(1,0,f(x))] * [b-a] \asymp \int_{a}^{b} \left\lvert f(x) \right\rvert dx$$
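The distinction between the two identities is easiest to see with a function that changes sign. A base-R sketch using $f(x) = x^3$ over $[-1, 1]$, where the definite integral vanishes while the total area $\int_{-1}^{1} \lvert x^3 \rvert dx = 0.5$ (the `upm1`/`lpm1` helpers are hypothetical stand-ins for `UPM(1, ...)`/`LPM(1, ...)`, assuming degree-1 partial moments are means of clipped deviations):

```r
# Hypothetical base-R stand-ins for UPM(1, target, x) / LPM(1, target, x),
# assuming degree-1 partial moments are means of clipped deviations
upm1 = function(target, x) mean(pmax(x - target, 0))
lpm1 = function(target, x) mean(pmax(target - x, 0))

x = seq(-1, 1, .001); y = x ^ 3

# Definite integral: (UPM - LPM) * (b - a) -> 0, positive and negative areas cancel
(upm1(0, y) - lpm1(0, y)) * (1 - -1)

# Total area: (UPM + LPM) * (b - a) -> ~0.5, both areas accumulate
(upm1(0, y) + lpm1(0, y)) * (1 - -1)
```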

For example, when ascertaining the probability of an increase in $A$ given an increase in $B$, the `Co.UPM(degree_x, degree_y, x, y, target_x, target_y)` target parameters are set to `target_x = 0` and `target_y = 0`, and the `UPM(degree, target, variable)` target parameter is also set to `target = 0`.

$$P(A|B)=\frac{Co.UPM(0,0,A,B,0,0)}{UPM(0,0,B)}$$
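The same ratio can be sketched in base R, under the assumption that degree-0 (co-)partial moments reduce to indicator means; `A` and `B` here are illustrative data, not package output:

```r
# Assumed degree-0 reductions:
# Co.UPM(0, 0, A, B, 0, 0) ~ P(A > 0 and B > 0); UPM(0, 0, B) ~ P(B > 0)
set.seed(123)
A = rnorm(100); B = rnorm(100)

co.upm = mean(A > 0 & B > 0)  # joint probability both increase
upm0   = mean(B > 0)          # marginal probability B increases

co.upm / upm0                 # P(A increase | B increase)

# Agrees with the direct conditional estimate:
mean(A[B > 0] > 0)
```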

If the user is so motivated, detailed arguments and proofs are provided in the package's accompanying references.
