hpaDist: Probabilities and Moments Hermite Polynomial Approximation


Probabilities and Moments Hermite Polynomial Approximation

Description

Approximation of truncated, marginal and conditional densities, moments and cumulative probabilities of multivariate distributions via the Hermite polynomial based approach proposed by Gallant and Nychka in 1987.

The density approximating function is a scale-adjusted product of two terms. The first is the square of a multivariate polynomial of degrees pol_degrees with coefficients vector pol_coefficients. The second is the product of densities of independent normal random variables whose expected values and standard deviations are given by the mean and sd vectors correspondingly. The approximating function satisfies the properties of a density function, thus generating a broad family of distributions. Characteristics of these distributions (moments, quantiles, probabilities and so on) may provide accurate approximations to the characteristics of other distributions. Moreover, it is usually possible to obtain an arbitrarily close approximation by increasing the polynomial degrees.

Usage

dhpa(
  x,
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

phpa(
  x,
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

ihpa(
  x_lower = numeric(0),
  x_upper = numeric(0),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

ehpa(
  x = numeric(0),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  expectation_powers = numeric(0),
  is_parallel = FALSE,
  is_validation = TRUE
)

etrhpa(
  tr_left = numeric(0),
  tr_right = numeric(0),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  expectation_powers = numeric(0),
  is_parallel = FALSE,
  is_validation = TRUE
)

dtrhpa(
  x,
  tr_left = numeric(0),
  tr_right = numeric(0),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

itrhpa(
  x_lower = numeric(0),
  x_upper = numeric(0),
  tr_left = numeric(0),
  tr_right = numeric(0),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

dhpaDiff(
  x,
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  type = "pol_coefficients",
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

ehpaDiff(
  x = numeric(0),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  expectation_powers = numeric(0),
  type = "pol_coefficients",
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

ihpaDiff(
  x_lower = numeric(0),
  x_upper = numeric(0),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0),
  type = "pol_coefficients",
  is_parallel = FALSE,
  log = FALSE,
  is_validation = TRUE
)

qhpa(
  p,
  x = matrix(1, 1),
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  given_ind = numeric(0),
  omit_ind = numeric(0),
  mean = numeric(0),
  sd = numeric(0)
)

rhpa(
  n,
  pol_coefficients = numeric(0),
  pol_degrees = numeric(0),
  mean = numeric(0),
  sd = numeric(0)
)

Arguments

x

numeric matrix of function arguments and conditional values. Note that the rows of x are points (observations) while the random vector components (variables) are columns.

pol_coefficients

numeric vector of polynomial coefficients.

pol_degrees

non-negative integer vector of polynomial degrees (orders).

given_ind

logical or numeric vector indicating whether the corresponding random vector component is conditioned. By default it is a logical vector of FALSE values. If given_ind[i] equals TRUE or i then the i-th column of the x matrix will contain conditional values.

omit_ind

logical or numeric vector indicating whether the corresponding random vector component is omitted. By default it is a logical vector of FALSE values. If omit_ind[i] equals TRUE or i then the values in the i-th column of the x matrix will be ignored.

mean

numeric vector of expected values.

sd

positive numeric vector of standard deviations.

is_parallel

if TRUE then multiple cores will be used for some calculations. This usually provides a speed advantage for sufficiently large samples (roughly more than 1000 observations).

log

logical; if TRUE then probabilities p are given as log(p) or derivatives will be taken with respect to log(p).

is_validation

logical value indicating whether function input arguments should be validated. Set it to FALSE for a slight performance boost (default value is TRUE).

x_lower

numeric matrix of lower integration limits. Note that x_lower rows are observations while variables are columns.

x_upper

numeric matrix of upper integration limits. Note that x_upper rows are observations while variables are columns.

expectation_powers

integer vector of powers for the random vector components.

tr_left

numeric matrix of left (lower) truncation limits. Note that tr_left rows are observations while variables are columns. If tr_left and tr_right are single row matrices then the truncation limits determined by their first rows will be applied to all observations.

tr_right

numeric matrix of right (upper) truncation limits. Note that tr_right rows are observations while variables are columns. If tr_left and tr_right are single row matrices then the truncation limits determined by their first rows will be applied to all observations.

type

determines the partial derivatives to be included into the gradient. If type = "pol_coefficients" then the gradient will contain partial derivatives with respect to the polynomial coefficients, listed in the same order as pol_coefficients. Other available types are type = "mean" and type = "sd". For the function dhpaDiff it is possible to take the gradient with respect to the x points by setting type = "x". For the function ihpaDiff it is possible to take the gradient with respect to the lower and upper points by setting type = "x_lower" or type = "x_upper" correspondingly. In order to get the full gradient please set type = "all".

p

numeric vector of probabilities.

n

positive integer representing the number of observations.

Details

It is possible to approximate densities (dhpa), cumulative probabilities (phpa, ihpa) and moments (ehpa) as well as their truncated forms (dtrhpa, itrhpa, etrhpa) and gradients (dhpaDiff, ihpaDiff). Note that phpa is a special case of ihpa where x corresponds to x_upper while x_lower is a matrix of negative infinity values. So phpa is intended to approximate cumulative distribution functions while ihpa approximates the probability that the random vector components fall between the values determined by the rows of the x_lower and x_upper matrices. Further details are given below.
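
To make this relationship concrete, the minimal sketch below (purely illustrative parameter values) compares phpa with an equivalent ihpa call whose lower limits are negative infinity; both calls should return the same probability.

x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
pol_degrees <- c(1, 2, 3)
pol_coefficients <- rep(1, prod(pol_degrees + 1))
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
# cdf approximation at point x
phpa(x = x, pol_coefficients = pol_coefficients,
     pol_degrees = pol_degrees, mean = mean, sd = sd)
# the same probability via ihpa with lower limits set to -Inf
ihpa(x_lower = matrix(-Inf, nrow = 1, ncol = 3), x_upper = x,
     pol_coefficients = pol_coefficients,
     pol_degrees = pol_degrees, mean = mean, sd = sd)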

Since the density approximating function is non-negative and integrates to 1, it is a density function for some m-variate random vector \xi. The approximating function f_{\xi }(x) has the following form:

f_{\xi }(x) = f_{\xi }(x;\mu, \sigma, \alpha) = \frac{1}{\psi }\prod\limits_{t=1}^{m}\phi ({x}_{t};{\mu }_{t},{\sigma }_{t}){{\left( \sum\limits_{{i}_{1}=0}^{{K}_{1}} {...}\sum\limits_{{i}_{m}=0}^{{K}_{m}}{{{\alpha }_{({{i}_{1}},...,{{i}_{m}}) }}\prod\limits_{r=1}^{m}x_{r}^{{{i}_{r}}}} \right)}^{2}}

\psi =\sum\limits_{{i}_{1}=0}^{{K}_{1}}{...}\sum \limits_{{i}_{m}=0}^{{K}_{m}}{\sum\limits_{{j}_{1}=0}^{{K}_{1}} {...}\sum\limits_{{j}_{m}=0}^{{K}_{m}}{{{\alpha }_{({i}_{1}, \cdots,{i}_{m})}}{{\alpha }_{({j}_{1},\cdots,{j}_{m})}}\prod \limits_{r=1}^{m}\mathcal{M}({i}_{r}+{j}_{r};{{\mu }_{r}},{\sigma }_{r})}},

where:

x = (x_{1},...,x_{m}) - vector of arguments, i.e. rows of the x matrix in dhpa.

{\alpha }_{({i}_{1},\cdots,{i}_{m})} - is polynomial coefficient corresponding to pol_coefficients[k] element. In order to investigate correspondence between k and ({i}_{1},\cdots,{i}_{m}) values please see 'Examples' section below or polynomialIndex function 'Details', 'Value' and 'Examples' sections. Note that if m=1 then pol_coefficients[k] simply corresponds to \alpha_{k-1}.

(K_{1},...,K_{m}) - are polynomial degrees (orders) provided via pol_degrees argument so pol_degrees[i] determines K_{i}.

\phi (.;{\mu }_{t},{\sigma }_{t}) - density function of a normal random variable whose mean \mu_{t} and standard deviation \sigma_{t} are determined by the mean[t] and sd[t] argument values.

\mathcal{M}(q;{{\mu }_{r}},{\sigma }_{r}) - q-th order moment of a normal random variable with mean {\mu }_{r} and standard deviation {\sigma }_{r}. Note that the function normalMoment allows one to calculate and differentiate the moments of a normal random variable.

\psi - constant term ensuring that f_{\xi }(x) is a density function.
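
To make the coefficient-to-powers correspondence concrete, here is a minimal sketch; the degrees (1, 2) are chosen purely for illustration.

# Each column k of the output gives the powers (i_1, i_2) of the polynomial
# term whose coefficient is stored in pol_coefficients[k]
polynomialIndex(c(1, 2))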

Therefore dhpa calculates f_{\xi}(x) values at the points determined by the rows of the x matrix, given the polynomial degrees pol_degrees (K) as well as the mean (\mu), sd (\sigma) and pol_coefficients (\alpha) parameter values. Note that mean, sd and pol_degrees are m-variate vectors while pol_coefficients has prod(pol_degrees + 1) elements.
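
A simple sanity check that follows from the formula above: with pol_degrees = 0 and a single unit coefficient the squared polynomial cancels with \psi, so the approximating density should reduce to the ordinary normal density (the mean and sd values below are purely illustrative).

x <- matrix(seq(-2, 2, by = 1), ncol = 1)
# degree-0 approximating density ...
dhpa(x = x, pol_coefficients = 1, pol_degrees = 0, mean = 0.5, sd = 1.5)
# ... should match the normal density at the same points
dnorm(x[, 1], mean = 0.5, sd = 1.5)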

Cumulative probabilities can be approximated as follows:

P\left(\underline{x}_{1}\leq\xi_{1}\leq\overline{x}_{1},..., \underline{x}_{m}\leq\xi_{m}\leq\overline{x}_{m}\right) =

= \bar{F}_{\xi}(\underline{x},\bar{x}) = \bar{F}_{\xi}(\underline{x},\bar{x};\mu, \sigma, \alpha) = \frac{1}{\psi } \prod\limits_{t=1}^{m}(\Phi ({{{\bar{x}}}_{t}};{{\mu }_{t}}, {{\sigma }_{t}})-\Phi ({{{\underline{x}}}_{t}};{{\mu }_{t}}, {{\sigma }_{t}})) *

* \sum\limits_{{{i}_{1}}=0}^{{{K}_{1}}}{...} \sum\limits_{{{i}_{m}}=0}^{{{K}_{m}}}{\sum\limits_{{{j}_{1}}=0}^{{{K}_{1}}} {...}\sum\limits_{{{j}_{m}}=0}^{{{K}_{m}}} {{{\alpha }_{({{i}_{1}},...,{{i}_{m}})}}{{\alpha }_{({{j}_{1}},...,{{j}_{m}}) }}}}\prod\limits_{r=1}^{m}\mathcal{M}_{TR}\left({i}_{r}+{j}_{r}; \underline{x}_{r},\overline{x}_{r},\mu_{r},\sigma_{r}\right)

where:

\Phi (.;{\mu }_{t},{\sigma }_{t}) - cumulative distribution function of a normal random variable whose mean \mu_{t} and standard deviation \sigma_{t} are determined by the mean[t] and sd[t] argument values.

\mathcal{M}_{TR}(q; \underline{x}_{r},\overline{x}_{r},\mu_{r},\sigma_{r}) - q-th order moment of a normal random variable truncated from below by \underline{x}_{r} and from above by \overline{x}_{r}, with mean {\mu }_{r} and standard deviation {\sigma }_{r}. Note that the function truncatedNormalMoment allows one to calculate and differentiate the moments of a truncated normal random variable.

\overline{x} = (\overline{x}_{1},...,\overline{x}_{m}) - vector of upper integration limits i.e. rows of x_upper matrix in ihpa.

\underline{x} = (\underline{x}_{1},...,\underline{x}_{m}) - vector of lower integration limits i.e. rows of x_lower matrix in ihpa.

Therefore ihpa calculates the interval distribution function \bar{F}_{\xi}(\underline{x},\bar{x}) values at the points determined by the rows of the x_lower (\underline{x}) and x_upper (\overline{x}) matrices. The rest of the arguments are similar to dhpa.

The expectation of a powered product is approximated as follows:

E\left( \prod\limits_{t=1}^{m}\xi_{t}^{{{k}_{t}}} \right)= \frac{1}{\psi }\sum\limits_{{{i}_{1}}=0}^{{{K}_{1}}}{...} \sum\limits_{{{i}_{m}}=0}^{{{K}_{m}}} {\sum\limits_{{{j}_{1}}=0}^{{{K}_{1}}}{...} \sum\limits_{{{j}_{m}}=0}^{{{K}_{m}}} {{{\alpha }_{({{i}_{1}},...,{{i}_{m}})}} {{\alpha }_{({{j}_{1}},...,{{j}_{m}})}}}} \prod\limits_{r=1}^{m}\mathcal{M}({{i}_{r}}+{{j}_{r}}+{{k}_{r}}; {{\mu }_{r}},{{\sigma }_{r}})

where (k_{1},...,k_{m}) are integer powers determined by the expectation_powers argument of ehpa, so expectation_powers[t] assigns k_{t}. Note that the x argument in ehpa allows one to provide conditional values.
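
As an informal cross-check, the first moment returned by ehpa should be close to a Monte Carlo estimate based on an rhpa sample; the univariate parameters below are purely illustrative.

set.seed(123)
pol_coefficients <- c(1, 0.1, -0.01)
# analytical approximation of the first moment
ehpa(pol_coefficients = pol_coefficients, pol_degrees = 2,
     mean = 1, sd = 2, expectation_powers = 1)
# Monte Carlo estimate of the same moment
mean(rhpa(n = 5000, pol_coefficients = pol_coefficients,
          pol_degrees = 2, mean = 1, sd = 2))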

By increasing the polynomial degrees (K_{1},...,K_{m}) it is possible to obtain an arbitrarily close approximation to the density of some m-variate random vector \xi^{\star}. So f_{\xi}(x) actually approximates f_{\xi^{\star}}(x). Accurate approximation requires an appropriate choice of mean, sd and pol_coefficients values. In order to obtain sample estimates of these parameters please apply the hpaML function.

In order to perform calculations for the marginal distribution of some \xi components please indicate the omitted components via the omit_ind argument. For example, if one assumes a 5-variate (m=5) distribution and wants to deal with the 1-st, 3-rd and 5-th components only, i.e. (\xi_{1},\xi_{3},\xi_{5}), then set omit_ind = c(FALSE, TRUE, FALSE, TRUE, FALSE), indicating that \xi_{2} and \xi_{4} should be 'omitted' from \xi since the 2-nd and 4-th values of omit_ind are TRUE. Then x should still be a 5-column matrix, but the values in its 2-nd and 4-th columns will not affect the calculation results. Note that the marginal distribution of t components of \xi usually does not coincide with any marginal distribution generated by a t-variate density approximating function.

In order to perform calculations for a conditional distribution, i.e. given fixed values for some \xi components, please indicate these components via the given_ind argument. For example, if one assumes a 5-variate (m=5) distribution and wants to deal with the 1-st, 3-rd and 5-th components given fixed values (suppose 8 and 10) for the other two components, i.e. (\xi|\xi_{2} = 8, \xi_{4} = 10), then set given_ind = c(FALSE, TRUE, FALSE, TRUE, FALSE) and x[2] = 8, x[4] = 10, where for simplicity it is assumed that x is a single row, 5-column matrix; it is possible to provide different conditional values for the same components simply by setting different values in different x rows.

Note that it is possible to combine the given_ind and omit_ind arguments. However, it is an error to set both given_ind[i] and omit_ind[i] to TRUE. Also, at least one element should remain FALSE in each of given_ind and omit_ind.

In order to consider the truncated distribution of \xi, i.e. \left(\xi|\underline{a}_{1}\leq\xi_{1}\leq\overline{b}_{1}, \cdots,\underline{a}_{m}\leq\xi_{m}\leq\overline{b}_{m}\right), please set the lower (left) truncation points \underline{a} and the upper (right) truncation points \overline{b} via the tr_left and tr_right arguments correspondingly. Note that if the lower truncation points are negative infinity and the upper truncation points are positive infinity then dtrhpa, itrhpa and etrhpa reduce to dhpa, ihpa and ehpa correspondingly.
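
To illustrate, the sketch below (illustrative parameters; very wide finite truncation limits stand in for infinite ones) shows dtrhpa with non-binding truncation returning practically the same value as dhpa.

x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
pol_degrees <- c(1, 2, 3)
pol_coefficients <- rep(1, prod(pol_degrees + 1))
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
# truncated density with very wide (non-binding) truncation limits
dtrhpa(x = x,
       tr_left = matrix(-100, nrow = 1, ncol = 3),
       tr_right = matrix(100, nrow = 1, ncol = 3),
       pol_coefficients = pol_coefficients, pol_degrees = pol_degrees,
       mean = mean, sd = sd)
# untruncated density at the same point
dhpa(x = x, pol_coefficients = pol_coefficients,
     pol_degrees = pol_degrees, mean = mean, sd = sd)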

In order to calculate the Jacobian of f_{\xi }(x;\mu, \sigma, \alpha) and \bar{F}_{\xi}(\underline{x},\bar{x};\mu, \sigma, \alpha) w.r.t. all or some particular parameters please apply the dhpaDiff and ihpaDiff functions correspondingly, specifying the parameters of interest via the type argument. If x or x_lower and x_upper are single row matrices then gradients will be calculated.
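
A quick way to sanity-check these analytical derivatives is to compare them with a central finite-difference approximation; the sketch below (illustrative parameters) does so for the first polynomial coefficient.

x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
pol_degrees <- c(1, 2, 3)
pol_coefficients <- rep(1, prod(pol_degrees + 1))
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
# analytical gradient with respect to the polynomial coefficients
grad <- dhpaDiff(x = x, pol_coefficients = pol_coefficients,
                 pol_degrees = pol_degrees, mean = mean, sd = sd)
# central finite-difference approximation for the first coefficient
eps <- 1e-6
pc_up <- pc_down <- pol_coefficients
pc_up[1] <- pc_up[1] + eps
pc_down[1] <- pc_down[1] - eps
fd <- (dhpa(x, pc_up, pol_degrees, mean = mean, sd = sd) -
       dhpa(x, pc_down, pol_degrees, mean = mean, sd = sd)) / (2 * eps)
c(analytical = grad[1], finite_difference = fd)  # should nearly coincide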

For further information please see 'Examples' section. Note that examples are given separately for each function.

If given_ind and (or) omit_ind are numeric vectors then they are insensitive to the order of their elements. For example, given_ind = c(5, 2, 3) is equivalent to given_ind = c(2, 3, 5).

The densities Hermite polynomial approximation approach was proposed by A. Gallant and D. W. Nychka in 1987. The main idea is to approximate an unknown distribution density with a scaled Hermite polynomial. For more information please refer to the literature listed below.

Value

Functions dhpa and dtrhpa return a vector of density values of length nrow(x), while phpa returns a vector of probabilities of length nrow(x).

Functions ihpa and itrhpa return a vector of probabilities of length nrow(x_upper).

If the x argument has not been provided or is a single row matrix then ehpa returns a single moment value. Otherwise it returns a vector of length nrow(x) containing moment values.

If the tr_left and tr_right arguments are single row matrices then etrhpa returns a single moment value. Otherwise it returns a vector of length max(nrow(tr_left), nrow(tr_right)) containing moment values.

Functions dhpaDiff and ihpaDiff return a Jacobian matrix. The number of columns depends on the type argument. The number of rows is nrow(x) for dhpaDiff and nrow(x_upper) for ihpaDiff.

If mean or sd are not specified they assume the default values of m-dimensional vectors of 0 and 1, respectively. If x_lower is not specified then it is a matrix of the same size as x_upper filled with negative infinity values. If expectation_powers is not specified then it is an m-dimensional vector of 0 values.
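
For instance, the short bivariate sketch below (arbitrary illustrative values) shows that omitting mean and sd is equivalent to supplying zero means and unit standard deviations explicitly.

x <- matrix(c(0.1, 0.2), nrow = 1)
pol_degrees <- c(1, 1)
pol_coefficients <- rep(1, prod(pol_degrees + 1))
# relying on the default mean and sd values
dhpa(x = x, pol_coefficients = pol_coefficients, pol_degrees = pol_degrees)
# supplying the defaults explicitly
dhpa(x = x, pol_coefficients = pol_coefficients, pol_degrees = pol_degrees,
     mean = c(0, 0), sd = c(1, 1))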

Please see 'Details' section for additional information.

References

Gallant, A. R. and Nychka, D. W. (1987). Semi-nonparametric Maximum Likelihood Estimation. Econometrica, 55(2), 363-390. <doi:10.2307/1913241>

Examples

## Example demonstrating dhpa function application.
## Let's approximate the joint density function of three random variables
## (i.e. X1, X2 and X3) at points x = (0.1, 0.2, 0.3) and
## y = (0.5, 0.8, 0.6) with a Hermite polynomial of (1, 2, 3) degrees whose
## polynomial coefficients equal 1 except the coefficient related to the
## x1*(x3^2) polynomial element which equals 2. Also suppose that the
## normal densities' mean vector equals (1.1, 1.2, 1.3) while the standard
## deviations vector is (2.1, 2.2, 2.3).

# Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)   # x point as a single row matrix
y <- matrix(c(0.5, 0.8, 0.6), nrow = 1)   # y point as a single row matrix
x_y <- rbind(x, y)                        # matrix which rows are x and y
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial 
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
   row.names = c("x1 power", "x2 power", "x3 power", "coefficients"),
   optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate density approximation 
   # at point x (note that x should be a matrix)
dhpa(x = x,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd)
   # at points x and y
dhpa(x = x_y,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd)

# Condition second component to be 0.5 i.e. X2 = 0.5.
# Substitute x and y second components with conditional value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1) # or simply x[2] <- 0.5
y <- matrix(c(0.5, 0.5, 0.6), nrow = 1) # or simply y[2] <- 0.5
x_y <- rbind(x, y) 
# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)

# Calculate conditional (on X2 = 0.5) density approximation 
   # at point x
dhpa(x = x,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = given_ind)
   # at points x and y
dhpa(x = x_y,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = given_ind)	
	
# Consider third component marginal distribution conditioned on the
# second component 0.5 value i.e. (X3 | X2 = 0.5).
# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on x2 = 0.5) marginal (for x3) density approximation
   # at point x
dhpa(x = x,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = given_ind, 
     omit_ind = omit_ind)
   # at points x and y
dhpa(x = x_y,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = given_ind, 
     omit_ind = omit_ind)  
        
## Example demonstrating phpa function application.
## Let's approximate the joint cumulative distribution function (cdf) of
## three random variables (X1, X2, X3) at point (0.1, 0.2, 0.3) with a
## Hermite polynomial of (1, 2, 3) degrees whose polynomial coefficients
## equal 1 except the coefficient related to the x1*(x3^2) polynomial
## element which equals 2. Also suppose that the normal densities' mean
## vector equals (1.1, 1.2, 1.3) while the standard deviations vector
## is (2.1, 2.2, 2.3).

## Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)

# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
	           row.names = c("x1 power", "x2 power", 
	                         "x3 power", "coefficients"),
          	 optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate cdf approximation at point x
phpa(x = x,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd)
	
# Condition second component to be 0.5
# Substitute x second component with conditional value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1) # or simply x[2] <- 0.5

# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)

# Calculate conditional (on X2 = 0.5) cdf approximation at point x
phpa(x = x,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = given_ind)
	
# Consider third component marginal distribution
# conditioned on the second component 0.5 value

# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on X2 = 0.5) marginal (for X3) cdf 
# approximation at point x
phpa(x = x,
      pol_coefficients = pol_coefficients, 
      pol_degrees = pol_degrees,
      mean = mean, sd = sd,
      given_ind = given_ind, 
      omit_ind = omit_ind)

## Example demonstrating ihpa function application.
## Let's approximate the joint interval distribution function (intdf) of
## three random variables (X1, X2, X3) at lower and upper points
## (0.1, 0.2, 0.3) and (0.4, 0.5, 0.6) correspondingly with a Hermite
## polynomial of (1, 2, 3) degrees whose polynomial coefficients equal 1
## except the coefficient related to the x1*(x3^2) polynomial element
## which equals 2. Also suppose that the normal densities' mean vector
## equals (1.1, 1.2, 1.3) while the standard deviations vector
## is (2.1, 2.2, 2.3).

## Prepare initial values
x_lower <- matrix(c(0.1, 0.2, 0.3), nrow=1)
x_upper <- matrix(c(0.4, 0.5, 0.6), nrow=1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)

# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial 
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
	           row.names = c("x1 power", "x2 power", 
	                         "x3 power", "coefficients"),
              optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate intdf approximation at points x_lower and x_upper
ihpa(x_lower = x_lower, x_upper = x_upper, 
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd)
	
# Condition second component to be 0.7
# Substitute x second component with conditional value 0.7
x_upper <- matrix(c(0.4, 0.7, 0.6), nrow = 1)

# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)

# Calculate conditional (on X2 = 0.7) intdf approximation 
# at points x_lower and x_upper
ihpa(x_lower = x_lower, x_upper = x_upper,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = given_ind)
	
# Consider third component marginal distribution
# conditioned on the second component 0.7 value
# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on X2 = 0.7) marginal (for X3) 
# intdf approximation at points x_lower and x_upper
ihpa(x_lower = x_lower, x_upper = x_upper,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = given_ind, omit_ind = omit_ind)
	
## Example demonstrating ehpa function application.
## Let's approximate the powered product expectation of three random
## variables (X1, X2, X3) for powers (3, 2, 1) with a Hermite polynomial
## of (1, 2, 3) degrees whose polynomial coefficients equal 1 except the
## coefficient related to the x1*(x3^2) polynomial element which equals 2.
## Also suppose that the normal densities' mean vector equals
## (1.1, 1.2, 1.3) while the standard deviations vector is (2.1, 2.2, 2.3).

# Prepare initial values
expectation_powers <- c(3, 2, 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)

# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
              row.names = c("x1 power", "x2 power", 
                            "x3 power", "coefficients"),
              optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate expected powered product approximation
ehpa(pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd, 
     expectation_powers = expectation_powers)
	
# Condition second component to be 0.5
# Substitute x second component with conditional value 0.5
x <- matrix(c(NA, 0.5, NA), nrow = 1)
# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)

# Calculate conditional (on X2 = 0.5) expected powered product approximation
ehpa(x = x,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd, 
     expectation_powers = expectation_powers,
     given_ind = given_ind)
	
# Consider third component marginal distribution
# conditioned on the second component 0.5 value
# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on X2 = 0.5) marginal (for X3) expected 
# powered product approximation
ehpa(x = x,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd, 
     expectation_powers = expectation_powers,
     given_ind = given_ind, 
     omit_ind = omit_ind)
	
## Example demonstrating etrhpa function application.
## Let's approximate the powered product expectation of three truncated
## random variables (X1, X2, X3) for powers (3, 2, 1) with a Hermite
## polynomial of (1, 2, 3) degrees whose polynomial coefficients equal 1
## except the coefficient related to the x1*(x3^2) polynomial element which
## equals 2. Also suppose that the normal densities' mean vector equals
## (1.1, 1.2, 1.3) while the standard deviations vector is (2.1, 2.2, 2.3).
## Suppose that the lower and upper truncation points are
## (-1.1, -1.2, -1.3) and (1.1, 1.2, 1.3) correspondingly.

# Prepare initial values
expectation_powers <- c(3, 2, 1)
tr_left <- matrix(c(-1.1, -1.2, -1.3), nrow = 1)
tr_right <- matrix(c(1.1, 1.2, 1.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
              row.names = c("x1 power", "x2 power", 
                            "x3 power", "coefficients"),
              optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate expected powered product approximation for truncated distribution
etrhpa(pol_coefficients = pol_coefficients, 
       pol_degrees = pol_degrees,
       mean = mean, sd = sd, 
       expectation_powers = expectation_powers,
       tr_left = tr_left, tr_right = tr_right)
       
## Example demonstrating dtrhpa function application.
## Let's approximate the joint density function of three random variables
## (X1, X2, X3) at point (0.1, 0.2, 0.3) with a Hermite polynomial of
## (1, 2, 3) degrees whose polynomial coefficients equal 1 except the
## coefficient related to the x1*(x3^2) polynomial element which equals 2.
## Also suppose that the normal densities' mean vector equals
## (1.1, 1.2, 1.3) while the standard deviations vector is (2.1, 2.2, 2.3).
## Suppose that the lower and upper truncation points are
## (-1.1, -1.2, -1.3) and (1.1, 1.2, 1.3) correspondingly.

# Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
tr_left <- matrix(c(-1.1, -1.2, -1.3), nrow = 1)
tr_right <- matrix(c(1.1, 1.2, 1.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)

# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
              row.names = c("x1 power", "x2 power", 
                            "x3 power", "coefficients"),
              optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate density approximation at point x
dtrhpa(x = x,
       pol_coefficients = pol_coefficients, 
       pol_degrees = pol_degrees,
       mean = mean, sd = sd,
       tr_left = tr_left, 
       tr_right = tr_right)
	
# Condition second component to be 0.5
# Substitute x second component with conditional value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1)
# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate conditional (on x2 = 0.5) density approximation at point x
dtrhpa(x = x,
       pol_coefficients = pol_coefficients, 
       pol_degrees = pol_degrees,
       mean = mean, sd = sd,
       given_ind = given_ind,
       tr_left = tr_left, tr_right = tr_right)
	
# Consider third component marginal distribution
# conditioned on the second component 0.5 value
# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on X2 = 0.5) marginal (for X3) 
# density approximation at point x
dtrhpa(x = x,
       pol_coefficients = pol_coefficients, 
       pol_degrees = pol_degrees,
       mean = mean, sd = sd,
       given_ind = given_ind, omit_ind = omit_ind,
       tr_left = tr_left, tr_right = tr_right)
       
## Example demonstrating itrhpa function application.
## Let's approximate the joint interval distribution function of three
## truncated random variables (X1, X2, X3) at lower and upper points
## (0.1, 0.2, 0.3) and (0.4, 0.5, 0.6) correspondingly with a Hermite
## polynomial of (1, 2, 3) degrees whose polynomial coefficients equal 1
## except the coefficient related to the x1*(x3^2) polynomial element which
## equals 2. Also suppose that the normal densities' mean vector equals
## (1.1, 1.2, 1.3) while the standard deviations vector is (2.1, 2.2, 2.3).
## Suppose that the lower and upper truncation points are
## (-1.1, -1.2, -1.3) and (1.1, 1.2, 1.3) correspondingly.

# Prepare initial values
x_lower <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
x_upper <- matrix(c(0.4, 0.5, 0.6), nrow = 1)
tr_left <- matrix(c(-1.1, -1.2, -1.3), nrow = 1)
tr_right <- matrix(c(1.1, 1.2, 1.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial 
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
              row.names = c("x1 power", "x2 power", 
                            "x3 power", "coefficients"),
              optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate intdf approximation at points x_lower and x_upper
itrhpa(x_lower = x_lower, x_upper = x_upper, 
       pol_coefficients = pol_coefficients, 
       pol_degrees = pol_degrees,
       mean = mean, sd = sd,
       tr_left = tr_left, tr_right = tr_right)
    
# Condition second component to be 0.7
# Substitute x second component with conditional value 0.7
x_upper <- matrix(c(0.4, 0.7, 0.6), nrow = 1)
# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)

# Calculate conditional (on X2 = 0.7) intdf 
# approximation at points x_lower and x_upper
itrhpa(x_lower = x_lower, x_upper = x_upper,
       pol_coefficients = pol_coefficients, 
       pol_degrees = pol_degrees,
       mean = mean, sd = sd,
       given_ind = given_ind,
       tr_left = tr_left, tr_right = tr_right)
    
# Consider third component marginal distribution
# conditioned on the second component 0.7 value
# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on X2 = 0.7) marginal (for X3) intdf 
# approximation at points x_lower and x_upper
itrhpa(x_lower = x_lower, x_upper = x_upper,
       pol_coefficients = pol_coefficients, 
       pol_degrees = pol_degrees,
       mean = mean, sd = sd,
       given_ind = given_ind, omit_ind = omit_ind,
       tr_left = tr_left, tr_right = tr_right)
       
## Example demonstrating dhpaDiff function application.
## Let's approximate the joint density function of three random variables
## (X1, X2, X3) at point (0.1, 0.2, 0.3) with a Hermite polynomial of
## (1, 2, 3) degrees whose polynomial coefficients equal 1 except the
## coefficient related to the x1*(x3^2) polynomial element which equals 2.
## Also suppose that the normal densities' mean vector equals
## (1.1, 1.2, 1.3) while the standard deviations vector is (2.1, 2.2, 2.3).
## In this example let's calculate the density approximating function's
## gradient with respect to various parameters.

# Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
              row.names = c("x1 power", "x2 power", 
                            "x3 power", "coefficients"),
              optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate density approximation gradient with respect
# to polynomial coefficients at point x
dhpaDiff(x = x,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd)
	
# Condition second component to be 0.5
# Substitute x second component with conditional value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1)
# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)

# Calculate conditional (on X2 = 0.5) density approximation's 
# gradient with respect to polynomial coefficients at point x
dhpaDiff(x = x,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind)
	
# Consider third component marginal distribution
# conditioned on the second component 0.5 value
# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on X2 = 0.5) marginal (for X3) density 
# approximation's gradient with respect to: 
  # polynomial coefficients
dhpaDiff(x = x,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, 
         omit_ind = omit_ind)
  # mean
dhpaDiff(x = x,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, 
         omit_ind = omit_ind,
         type = "mean")
  # sd
dhpaDiff(x = x,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, 
         omit_ind = omit_ind,
         type = "sd")
  # x
dhpaDiff(x = x,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, 
         omit_ind = omit_ind,
         type = "x")
         
## Example demonstrating ehpaDiff function application.
## Let's approximate the expectation of the form
## E((X1 ^ 3) * (X2 ^ 1) * (X3 ^ 2)) for three random variables
## (X1, X2, X3) and calculate the gradient.

# Distribution parameters
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
pol_coefficients_n <- prod(pol_degrees + 1)
pol_coefficients <- rep(1, pol_coefficients_n)

# Set powers for expectation
expectation_powers <- c(3, 1, 2)

# Calculate expectation approximation gradient 
# with respect to all parameters
ehpaDiff(pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         expectation_powers = expectation_powers,
         type = "all")

# Let's calculate gradient of E(X1 ^ 3 | (X2 = 1, X3 = 2))
x <- c(0, 1, 2)                  # x[1] may be arbitrary (not NA) values
expectation_powers <- c(3, 0, 0) # expectation_powers[2:3] may be 
                                 # arbitrary (not NA) values
given_ind <- c(2, 3)
ehpaDiff(x = x,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind,
         expectation_powers = expectation_powers,
         type = "all")

## Example demonstrating ihpaDiff function application.
## Let's approximate the joint interval distribution function (intdf) of
## three random variables (X1, X2, X3) at lower and upper points
## (0.1, 0.2, 0.3) and (0.4, 0.5, 0.6) correspondingly with a Hermite
## polynomial of (1, 2, 3) degrees whose polynomial coefficients equal 1
## except the coefficient related to the x1*(x3^2) polynomial element which
## equals 2. Also suppose that the normal densities' mean vector equals
## (1.1, 1.2, 1.3) while the standard deviations vector is (2.1, 2.2, 2.3).
## In this example let's calculate the interval distribution approximating
## function's gradient with respect to polynomial coefficients.

# Prepare initial values
x_lower <- matrix(c(0.1, 0.2, 0.3), nrow=1)
x_upper <- matrix(c(0.4, 0.5, 0.6), nrow=1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)

# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)

# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)

# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2

# Visualize correspondence between polynomial 
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
              row.names = c("x1 power", "x2 power", 
                            "x3 power", "coefficients"),
              optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)

# Calculate intdf approximation gradient with respect to 
# polynomial coefficients at points x_lower and x_upper
ihpaDiff(x_lower = x_lower, x_upper = x_upper, 
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd)
	
# Condition second component to be 0.7
# Substitute x second component with conditional value 0.7
x_upper <- matrix(c(0.4, 0.7, 0.6), nrow = 1)

# Set TRUE to the second component indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)

# Calculate conditional (on X2 = 0.7) intdf approximation gradient
# with respect to polynomial coefficients at points x_lower and x_upper
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind)
	
# Consider third component marginal distribution
# conditioned on the second component 0.7 value
# Set TRUE to the first component indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)

# Calculate conditional (on X2 = 0.7) marginal (for X3) intdf approximation
# gradient with respect to:
  # polynomial coefficients
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, omit_ind = omit_ind)
  # mean
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, omit_ind = omit_ind,
         type = "mean")
  # sd
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, omit_ind = omit_ind,
         type = "sd")
  # x_lower
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
         pol_coefficients = pol_coefficients, 
         pol_degrees = pol_degrees,
         mean = mean, sd = sd,
         given_ind = given_ind, omit_ind = omit_ind,
         type = "x_lower")
  # x_upper
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
          pol_coefficients = pol_coefficients, 
          pol_degrees = pol_degrees,
          mean = mean, sd = sd,
          given_ind = given_ind, omit_ind = omit_ind,
          type = "x_upper")
          
## Examples demonstrating qhpa function application.

## Sub-example 1 - univariate distribution
## Consider random variable X

# Distribution parameters
mean <- 1
sd <- 2
pol_degrees <- 2
pol_coefficients <- c(1, 0.1, -0.01)

# The level of quantile
p <- 0.7

# Calculate quantile of X
qhpa(p = p,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd)
     
## Sub-example 2 - marginal distribution
## Consider random vector (X1, X2) and quantile of X1

# Distribution parameters
mean <- c(1, 1.2)
sd <- c(2, 3)
pol_degrees <- c(2, 2)
pol_coefficients <- c(1, 0.1, -0.01, 0.2, 0.012, 
                      0.0013, 0.0042, 0.00025, 0)

# The level of quantile
p <- 0.7

# Calculate quantile of X1
qhpa(p = p,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     omit_ind = 2)                          # set omitted variable index
     
## Sub-example 3 - marginal and conditional distribution
## Consider random vector (X1, X2, X3) and
## quantiles of X1|X3 and X1|(X2, X3)

# Distribution parameters
mean <- c(1, 1.2, 0.9)
sd <- c(2, 3, 2.5)
pol_degrees <- c(1, 1, 1)
pol_coefficients <- c(1, 0.1, -0.01, 0.2, 0.012, 
                      0.0013, 0.0042, 0.00025)

# The level of quantile
p <- 0.7

# Calculate quantile of X1|X3 = 0.2
qhpa(p = p,
     x = matrix(c(NA, NA, 0.2), nrow = 1),  # set any values to
                                            # unconditioned and 
                                            # omitted components
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     omit_ind = 2,                          # set omitted variable index
     given_ind = 3)                         # set conditioned variable index
     
# Calculate quantile of X1|(X2 = 0.5, X3 = 0.2)
qhpa(p = p,
     x = matrix(c(NA, 0.5, 0.2), nrow = 1), # set any values to 
                                            # unconditioned and 
                                            # omitted components
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd,
     given_ind = c(2, 3))                   # set conditioned
                                            # variables indexes
        
## Examples demonstrating rhpa function application.

# Set seed for reproducibility
set.seed(123)

# Distribution parameters
mean <- 1
sd <- 2
pol_degrees <- 2
pol_coefficients <- c(1, 0.1, -0.01)

# Simulate two observations from this distribution
rhpa(n = 2,
     pol_coefficients = pol_coefficients, 
     pol_degrees = pol_degrees,
     mean = mean, sd = sd)
        
