hpaDist    R Documentation
Approximation of truncated, marginal and conditional densities, moments and cumulative probabilities of multivariate distributions via the Hermite polynomial based approach proposed by Gallant and Nychka in 1987.
The density approximating function is a scale-adjusted product of two terms.
The first is a squared multivariate polynomial of degrees pol_degrees
with a vector of coefficients pol_coefficients.
The second is the product of independent normal random variables' densities
with expected values and standard deviations given by the mean and sd
vectors, respectively. The approximating function satisfies the properties
of a density function, thus generating a broad family of distributions.
Characteristics of these distributions
(moments, quantiles, probabilities and so on)
may provide accurate approximations to those of other
distributions. Moreover, it is usually possible to obtain an arbitrarily
close approximation by increasing the polynomial degrees.
dhpa(
x,
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
phpa(
x,
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
ihpa(
x_lower = numeric(0),
x_upper = numeric(0),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
ehpa(
x = numeric(0),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
expectation_powers = numeric(0),
is_parallel = FALSE,
is_validation = TRUE
)
etrhpa(
tr_left = numeric(0),
tr_right = numeric(0),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
mean = numeric(0),
sd = numeric(0),
expectation_powers = numeric(0),
is_parallel = FALSE,
is_validation = TRUE
)
dtrhpa(
x,
tr_left = numeric(0),
tr_right = numeric(0),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
itrhpa(
x_lower = numeric(0),
x_upper = numeric(0),
tr_left = numeric(0),
tr_right = numeric(0),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
dhpaDiff(
x,
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
type = "pol_coefficients",
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
ehpaDiff(
x = numeric(0),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
expectation_powers = numeric(0),
type = "pol_coefficients",
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
ihpaDiff(
x_lower = numeric(0),
x_upper = numeric(0),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0),
type = "pol_coefficients",
is_parallel = FALSE,
log = FALSE,
is_validation = TRUE
)
qhpa(
p,
x = matrix(1, 1),
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
given_ind = numeric(0),
omit_ind = numeric(0),
mean = numeric(0),
sd = numeric(0)
)
rhpa(
n,
pol_coefficients = numeric(0),
pol_degrees = numeric(0),
mean = numeric(0),
sd = numeric(0)
)
x
numeric matrix of function arguments and conditional values. The rows of this matrix are treated as separate points at which the functions are calculated.

pol_coefficients
numeric vector of polynomial coefficients.

pol_degrees
a non-negative integer vector of polynomial degrees (orders).

given_ind
logical or numeric vector indicating whether the corresponding random vector component is conditioned. By default it is a logical vector of FALSE values.

omit_ind
logical or numeric vector indicating whether the corresponding random vector component is omitted. By default it is a logical vector of FALSE values.

mean
numeric vector of expected values.

sd
positive numeric vector of standard deviations.

is_parallel
if TRUE then multiple cores will be used for some calculations, which usually provides a speed advantage for sufficiently large samples.

log
logical; if TRUE then the probabilities (densities) p are given as log(p).

is_validation
logical value indicating whether function input arguments should be validated. Set it to FALSE for a slight performance boost.

x_lower
numeric matrix of lower integration limits. The rows of this matrix are treated as separate points.

x_upper
numeric matrix of upper integration limits. The rows of this matrix are treated as separate points.

expectation_powers
integer vector of random vector components' powers.

tr_left
numeric matrix of left (lower) truncation limits. The rows of this matrix are treated as separate points.

tr_right
numeric matrix of right (upper) truncation limits. The rows of this matrix are treated as separate points.

type
character determining which partial derivatives should be included in the gradient (for example, type = "pol_coefficients").

p
numeric vector of probabilities.

n
positive integer representing the number of observations.
It is possible to approximate densities
(dhpa), cumulative probabilities
(phpa), interval probabilities (ihpa)
and moments (ehpa), as well as their truncated
forms (dtrhpa, itrhpa,
etrhpa)
and gradients (dhpaDiff, ihpaDiff).
Note that phpa is a special case of
ihpa in which x
corresponds to x_upper while x_lower is a matrix of
negative infinity values. So phpa is intended to
approximate cumulative
distribution functions, while ihpa approximates the
probability that
the random vector components fall between the values determined by the rows
of the x_lower and x_upper matrices. Further details are given below.
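The relationship between phpa and ihpa can be sketched as follows (a minimal univariate illustration with arbitrary parameter values, not part of the package's own examples):

```r
# phpa at point x should match ihpa with lower limits set to -Inf
a <- c(1, 0.1, 0.2)                      # arbitrary polynomial coefficients
x <- matrix(0.3, nrow = 1)
p1 <- phpa(x = x, pol_coefficients = a, pol_degrees = 2)
p2 <- ihpa(x_lower = matrix(-Inf, nrow = 1), x_upper = x,
           pol_coefficients = a, pol_degrees = 2)
# p1 and p2 should coincide up to numerical accuracy
```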
Since the density approximating function is non-negative and integrates
to 1, it is a density function of some m-variate
random vector \xi. The approximating function f_{\xi }(x)
has the following form:
f_{\xi}(x) = f_{\xi}(x;\mu,\sigma,\alpha) =
\frac{1}{\psi}\prod\limits_{t=1}^{m}\phi(x_{t};\mu_{t},\sigma_{t})
{\left(\sum\limits_{i_{1}=0}^{K_{1}}\cdots\sum\limits_{i_{m}=0}^{K_{m}}
\alpha_{(i_{1},\ldots,i_{m})}\prod\limits_{r=1}^{m}x_{r}^{i_{r}}\right)}^{2}
\psi=\sum\limits_{i_{1}=0}^{K_{1}}\cdots\sum\limits_{i_{m}=0}^{K_{m}}
\sum\limits_{j_{1}=0}^{K_{1}}\cdots\sum\limits_{j_{m}=0}^{K_{m}}
\alpha_{(i_{1},\ldots,i_{m})}\alpha_{(j_{1},\ldots,j_{m})}
\prod\limits_{r=1}^{m}\mathcal{M}(i_{r}+j_{r};\mu_{r},\sigma_{r}),
where:
x = (x_{1},\ldots,x_{m}) is the vector of arguments, i.e., a row
of the x matrix in dhpa.
\alpha_{(i_{1},\cdots,i_{m})} is the polynomial coefficient
corresponding to the pol_coefficients[k] element. In order to investigate
the correspondence between k and the (i_{1},\cdots,i_{m}) values,
please see the 'Examples' section below or the 'Details', 'Value' and
'Examples' sections of the polynomialIndex
function. Note that if m=1
then pol_coefficients[k] simply corresponds to \alpha_{k-1}.
(K_{1},\ldots,K_{m}) are the polynomial degrees (orders) provided via
the pol_degrees argument, so pol_degrees[i] determines K_{i}.
\phi(.;\mu_{t},\sigma_{t}) is the density function of a normal random
variable whose mean \mu_{t} and standard deviation \sigma_{t} are
determined by the mean[t] and sd[t] argument values.
\mathcal{M}(q;\mu_{r},\sigma_{r}) is the q-th order
moment of a normal random variable with mean \mu_{r} and standard
deviation \sigma_{r}. Note that the
normalMoment function can calculate and differentiate normal
random variables' moments.
\psi is a constant term ensuring that f_{\xi }(x) is a
density function.
Therefore dhpa calculates f_{\xi}(x)
values at the points
determined by the rows of the x matrix, given the polynomial
degrees pol_degrees (K) as well as the mean (\mu),
sd (\sigma) and pol_coefficients (\alpha)
parameter values. Note that mean, sd and pol_degrees are
m-variate vectors, while pol_coefficients has
prod(pol_degrees + 1) elements.
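To illustrate the coefficient count (the second line uses the polynomialIndex correspondence matrix mentioned above):

```r
pol_degrees <- c(1, 2, 3)
prod(pol_degrees + 1)               # 24 coefficients are required
ncol(polynomialIndex(pol_degrees))  # the same value, via the index matrix
```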
Cumulative probabilities can be approximated as follows:
P\left(\underline{x}_{1}\leq\xi_{1}\leq\overline{x}_{1},\ldots,
\underline{x}_{m}\leq\xi_{m}\leq\overline{x}_{m}\right) =
\bar{F}_{\xi}(\underline{x},\overline{x}) =
\bar{F}_{\xi}(\underline{x},\overline{x};\mu,\sigma,\alpha) =
\frac{1}{\psi}
\prod\limits_{t=1}^{m}\left(\Phi(\overline{x}_{t};\mu_{t},\sigma_{t})-
\Phi(\underline{x}_{t};\mu_{t},\sigma_{t})\right)\times
\sum\limits_{i_{1}=0}^{K_{1}}\cdots\sum\limits_{i_{m}=0}^{K_{m}}
\sum\limits_{j_{1}=0}^{K_{1}}\cdots\sum\limits_{j_{m}=0}^{K_{m}}
\alpha_{(i_{1},\ldots,i_{m})}\alpha_{(j_{1},\ldots,j_{m})}
\prod\limits_{r=1}^{m}\mathcal{M}_{TR}\left(i_{r}+j_{r};
\underline{x}_{r},\overline{x}_{r},\mu_{r},\sigma_{r}\right),
where:
\Phi(.;\mu_{t},\sigma_{t}) is the cumulative distribution function of a
normal random variable whose mean \mu_{t} and standard deviation
\sigma_{t} are determined by the mean[t] and sd[t] argument
values.
\mathcal{M}_{TR}(q;
\underline{x}_{r},\overline{x}_{r},\mu_{r},\sigma_{r}) is the
q-th order
moment of a normal random variable with mean \mu_{r} and standard
deviation \sigma_{r}, truncated from below by \underline{x}_{r} and
from above by \overline{x}_{r}. Note that the
truncatedNormalMoment function can calculate and
differentiate truncated normal random variables' moments.
\overline{x} = (\overline{x}_{1},\ldots,\overline{x}_{m}) is the
vector of upper integration limits,
i.e., a row of the x_upper matrix in ihpa.
\underline{x} = (\underline{x}_{1},\ldots,\underline{x}_{m}) is the
vector of lower integration limits,
i.e., a row of the x_lower matrix in ihpa.
Therefore ihpa calculates the interval distribution
function \bar{F}_{\xi}(\underline{x},\overline{x})
values at the points determined by the rows of the x_lower
(\underline{x}) and x_upper (\overline{x}) matrices.
The rest of the arguments are similar to those of dhpa.
The expected value of a powered product is approximated as follows:
E\left(\prod\limits_{t=1}^{m}\xi_{t}^{k_{t}}\right)=
\frac{1}{\psi}\sum\limits_{i_{1}=0}^{K_{1}}\cdots
\sum\limits_{i_{m}=0}^{K_{m}}
\sum\limits_{j_{1}=0}^{K_{1}}\cdots
\sum\limits_{j_{m}=0}^{K_{m}}
\alpha_{(i_{1},\ldots,i_{m})}\alpha_{(j_{1},\ldots,j_{m})}
\prod\limits_{r=1}^{m}\mathcal{M}(i_{r}+j_{r}+k_{r};
\mu_{r},\sigma_{r}),
where (k_{1},\ldots,k_{m}) are integer powers determined by the
expectation_powers argument of ehpa, so
expectation_powers[t] assigns k_{t}. Note that the x argument
of ehpa allows one to provide conditional values.
By increasing the polynomial degrees (K_{1},\ldots,K_{m}) it is possible to
provide an arbitrarily close approximation to the density of some m-variate
random vector \xi^{\star}. So f_{\xi}(x) actually
approximates f_{\xi^{\star}}(x). An accurate approximation requires
the selection of appropriate mean, sd and pol_coefficients
values. In order to obtain sample estimates of these parameters, apply the
hpaML function.
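For instance, parameters could be estimated from a sample roughly as follows (a sketch with simulated data; the minimal argument set assumed here, a data matrix plus pol_degrees, should be checked against the hpaML help page):

```r
set.seed(123)
z <- matrix(rnorm(1000), ncol = 1)        # some univariate sample
fit <- hpaML(data = z, pol_degrees = 3)   # maximum likelihood estimation
summary(fit)                              # estimated coefficients, mean and sd
```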
In order to perform calculations for the marginal distribution of some
\xi components, indicate the omitted
components via the omit_ind argument.
For example, if one assumes an m=5-variate distribution
and wants to deal only with the 1-st, 3-rd and 5-th components,
i.e. (\xi_{1},\xi_{3},\xi_{5}), then set
omit_ind = c(FALSE, TRUE, FALSE, TRUE, FALSE),
indicating that \xi_{2} and \xi_{4} should be 'omitted' from
\xi since the 2-nd and 4-th values of omit_ind are
TRUE.
Then x should still be a 5-column matrix, but the
values in its 2-nd and 4-th columns will not affect the
calculation results. Meanwhile, note that the marginal distribution of t
components of \xi usually does not coincide with any marginal
distribution generated by a t-variate density approximating function.
In order to perform calculations for a conditional distribution, i.e., given
fixed values for some \xi components, provide these
components via the given_ind argument.
For example, if one assumes an m=5-variate distribution
and wants to deal with the 1-st, 3-rd and 5-th components
given fixed values (suppose 8 and 10) for the other two components, i.e.,
(\xi|\xi_{2} = 8, \xi_{4} = 10), then set
given_ind = c(FALSE, TRUE, FALSE, TRUE, FALSE) and
x[2] = 8, x[4] = 10, where for simplicity it is assumed that
x is a single-row, 5-column matrix. It is possible to provide
different conditional values for the same components by assigning different
values to different rows of x.
Note that it is possible to combine the given_ind and omit_ind
arguments. However, it is wrong to set both given_ind[i] and
omit_ind[i] to TRUE. Also, at least one element of each of
given_ind and omit_ind should be FALSE.
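The combined use can be sketched as follows (illustrative degrees and coefficients, all set to 1 for brevity):

```r
# Density of X3 given X2 = 0.5, with X1 integrated out (omitted)
dhpa(x = matrix(c(0, 0.5, 0.3), nrow = 1),   # 1-st value is ignored
     pol_coefficients = rep(1, prod(c(1, 1, 1) + 1)),
     pol_degrees = c(1, 1, 1),
     given_ind = c(FALSE, TRUE, FALSE),
     omit_ind  = c(TRUE, FALSE, FALSE))
# Invalid: given_ind[2] and omit_ind[2] must not both be TRUE
```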
In order to consider a truncated distribution of \xi, i.e.,
\left(\xi|\underline{a}_{1}\leq\xi_{1}\leq\overline{b}_{1},
\cdots,\underline{a}_{m}\leq\xi_{m}\leq\overline{b}_{m}\right),
please set the lower (left) truncation points \underline{a} and the
upper (right) truncation points \overline{b} via the tr_left
and tr_right arguments, respectively. Note that if the lower truncation
points are negative infinity and the upper truncation points are positive
infinity, then dtrhpa, itrhpa and
etrhpa are equivalent to dhpa,
ihpa and ehpa, respectively.
In order to calculate the Jacobian of f_{\xi }(x;\mu, \sigma, \alpha)
and \bar{F}_{\xi}(\underline{x},\overline{x};\mu, \sigma, \alpha) w.r.t.
all or some particular parameters, apply the dhpaDiff
and ihpaDiff functions, respectively, specifying the
parameters of interest via the type argument. If x (or
x_lower and x_upper) is a single-row matrix then a gradient
will be calculated.
For further information please see the 'Examples' section. Note that examples are given separately for each function.
If given_ind and/or omit_ind are numeric vectors
then they are insensitive to the order of their elements.
For example, given_ind = c(5, 2, 3) is equivalent
to given_ind = c(2, 3, 5).
The Hermite polynomial approximation approach for densities was proposed by A. Gallant and D. W. Nychka in 1987. The main idea is to approximate an unknown distribution's density with a scaled Hermite polynomial. For more information please refer to the literature listed below.
Functions dhpa and dtrhpa return a vector of
density values of length nrow(x), while phpa returns a
vector of probabilities of length nrow(x).
Functions ihpa and
itrhpa return a vector of probabilities of length
nrow(x_upper).
If the x argument has not been provided or is a single-row
matrix, then
ehpa returns a single moment value. Otherwise it returns a
vector of length nrow(x) containing moment values.
If the tr_left and tr_right arguments are single-row matrices,
then etrhpa returns a single moment value.
Otherwise it returns a vector of length
max(nrow(tr_left), nrow(tr_right)) containing moment values.
Functions dhpaDiff and ihpaDiff
return a Jacobian matrix. The number
of columns depends on the type argument. The number of rows is
nrow(x) for dhpaDiff and
nrow(x_upper) for
ihpaDiff.
If mean or sd is not specified, it defaults to an
m-dimensional vector of 0 or 1 values, respectively.
If x_lower is not specified, it defaults to a matrix of the
same size as x_upper filled with negative infinity values. If
expectation_powers is not specified, it defaults to an m-dimensional
vector of 0 values.
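These defaults can be checked directly (a small sketch with arbitrary degrees and unit coefficients):

```r
x <- matrix(c(0.1, 0.2), nrow = 1)
d1 <- dhpa(x, pol_coefficients = rep(1, 4), pol_degrees = c(1, 1))
d2 <- dhpa(x, pol_coefficients = rep(1, 4), pol_degrees = c(1, 1),
           mean = c(0, 0), sd = c(1, 1))
# d1 and d2 are identical since mean and sd default to 0 and 1
```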
Please see 'Details' section for additional information.
A. Gallant and D. W. Nychka (1987) <doi:10.2307/1913241>
## Example demonstrating dhpa function application.
## Let's approximate the joint density function of three random variables
## (X1, X2, X3) at points x = (0.1, 0.2, 0.3) and y = (0.5, 0.8, 0.6) with a
## Hermite polynomial of (1, 2, 3) degrees whose polynomial coefficients
## equal 1 except the coefficient related to the x1*(x3^2) polynomial
## element, which equals 2. Also suppose that the normal density's mean
## vector equals (1.1, 1.2, 1.3) while the standard deviation vector is
## (2.1, 2.2, 2.3).
# Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1) # x point as a single row matrix
y <- matrix(c(0.5, 0.8, 0.6), nrow = 1) # y point as a single row matrix
x_y <- rbind(x, y) # matrix whose rows are x and y
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power", "x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate density approximation
# at point x (note that x should be a matrix)
dhpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
# at points x and y
dhpa(x = x_y,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
# Condition the second component to be 0.5, i.e. X2 = 0.5.
# Substitute the second components of x and y with the conditional value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1) # or simply x[2] <- 0.5
y <- matrix(c(0.5, 0.5, 0.6), nrow = 1) # or simply y[2] <- 0.5
x_y <- rbind(x, y)
# Set the second component of given_ind to TRUE, indicating conditioning
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate conditional (on X2 = 0.5) density approximation
# at point x
dhpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind)
# at points x and y
dhpa(x = x_y,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind)
# Consider the third component's marginal distribution conditioned on the
# second component's 0.5 value, i.e. (X3 | X2 = 0.5).
# Set the first component of omit_ind to TRUE, indicating that X1 is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate the conditional (on X2 = 0.5) marginal (for X3) density
# approximation at point x
dhpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
omit_ind = omit_ind)
# at points x and y
dhpa(x = x_y,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
omit_ind = omit_ind)
## Example demonstrating phpa function application.
## Let's approximate the joint cumulative distribution function (cdf) of
## three random variables (X1, X2, X3) at point (0.1, 0.2, 0.3) with a
## Hermite polynomial of (1, 2, 3) degrees whose polynomial coefficients
## equal 1 except the coefficient related to the x1*(x3^2) polynomial
## element, which equals 2. Also suppose that the normal density's mean
## vector equals (1.1, 1.2, 1.3) while the standard deviation vector is
## (2.1, 2.2, 2.3).
# Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate cdf approximation at point x
phpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
# Condition second component to be 0.5
# Substitute x second component with the conditional value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1) # or simply x[2] <- 0.5
# Set the second component to TRUE indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate conditional (on X2 = 0.5) cdf approximation at point x
phpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind)
# Consider third component marginal distribution
# conditioned on the second component 0.5 value
# Set the first component to TRUE indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate conditional (on X2 = 0.5) marginal (for X3) cdf
# approximation at point x
phpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
omit_ind = omit_ind)
## Example demonstrating ihpa function application.
## Let's approximate the joint interval distribution function (intdf) of
## three random variables (X1, X2, X3) at lower and upper points
## (0.1, 0.2, 0.3) and (0.4, 0.5, 0.6), respectively, with a Hermite
## polynomial of (1, 2, 3) degrees whose polynomial coefficients equal 1
## except the coefficient related to the x1*(x3^2) polynomial element,
## which equals 2. Also suppose that the normal density's mean vector
## equals (1.1, 1.2, 1.3) while the standard deviation vector is
## (2.1, 2.2, 2.3).
# Prepare initial values
x_lower <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
x_upper <- matrix(c(0.4, 0.5, 0.6), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate intdf approximation at points x_lower and x_upper
ihpa(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
# Condition the second component to be 0.7
# Substitute the second component of x_upper with the conditional value 0.7
x_upper <- matrix(c(0.4, 0.7, 0.6), nrow = 1)
# Set the second component to TRUE indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate the conditional (on X2 = 0.7) intdf approximation
# at points x_lower and x_upper
ihpa(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind)
# Consider third component marginal distribution
# conditioned on the second component 0.7 value
# Set the first component to TRUE indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate the conditional (on X2 = 0.7) marginal (for X3)
# intdf approximation at points x_lower and x_upper
ihpa(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind)
## Example demonstrating ehpa function application.
## Let's approximate the powered product expectation of three random
## variables (X1, X2, X3) for powers (3, 2, 1) with a Hermite polynomial
## of (1, 2, 3) degrees whose polynomial coefficients equal 1 except the
## coefficient related to the x1*(x3^2) polynomial element, which equals 2.
## Also suppose that the normal density's mean vector equals
## (1.1, 1.2, 1.3) while the standard deviation vector is (2.1, 2.2, 2.3).
# Prepare initial values
expectation_powers <- c(3, 2, 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate expected powered product approximation
ehpa(pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
expectation_powers = expectation_powers)
# Condition second component to be 0.5
# Substitute x second component with the conditional value 0.5
x <- matrix(c(NA, 0.5, NA), nrow = 1)
# Set the second component to TRUE indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate conditional (on X2 = 0.5) expected powered product approximation
ehpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
expectation_powers = expectation_powers,
given_ind = given_ind)
# Consider third component marginal distribution
# conditioned on the second component 0.5 value
# Set the first component to TRUE indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate the conditional (on X2 = 0.5) marginal (for X3) expected
# powered product approximation
ehpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
expectation_powers = expectation_powers,
given_ind = given_ind,
omit_ind = omit_ind)
## Example demonstrating etrhpa function application.
## Let's approximate the powered product expectation of three truncated
## random variables (X1, X2, X3) for powers (3, 2, 1) with a Hermite
## polynomial of (1, 2, 3) degrees whose polynomial coefficients equal 1
## except the coefficient related to the x1*(x3^2) polynomial element,
## which equals 2. Also suppose that the normal density's mean vector
## equals (1.1, 1.2, 1.3) while the standard deviation vector is
## (2.1, 2.2, 2.3). Suppose that the lower and upper truncation points are
## (-1.1, -1.2, -1.3) and (1.1, 1.2, 1.3), respectively.
# Prepare initial values
expectation_powers <- c(3, 2, 1)
tr_left <- matrix(c(-1.1, -1.2, -1.3), nrow = 1)
tr_right <- matrix(c(1.1, 1.2, 1.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate expected powered product approximation for truncated distribution
etrhpa(pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
expectation_powers = expectation_powers,
tr_left = tr_left, tr_right = tr_right)
## Example demonstrating dtrhpa function application.
## Let's approximate the joint density function of three truncated random
## variables (X1, X2, X3) at point (0.1, 0.2, 0.3) with a Hermite polynomial
## of (1, 2, 3) degrees whose polynomial coefficients equal 1 except the
## coefficient related to the x1*(x3^2) polynomial element, which equals 2.
## Also suppose that the normal density's mean vector equals (1.1, 1.2, 1.3)
## while the standard deviation vector is (2.1, 2.2, 2.3). Suppose that the
## lower and upper truncation points are (-1.1, -1.2, -1.3) and
## (1.1, 1.2, 1.3), respectively.
# Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
tr_left <- matrix(c(-1.1, -1.2, -1.3), nrow = 1)
tr_right <- matrix(c(1.1, 1.2, 1.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate density approximation at point x
dtrhpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
tr_left = tr_left,
tr_right = tr_right)
# Condition second component to be 0.5
# Substitute x second component with the conditional value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1)
# Set the second component to TRUE indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate the conditional (on X2 = 0.5) density approximation at point x
dtrhpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
tr_left = tr_left, tr_right = tr_right)
# Consider third component marginal distribution
# conditioned on the second component 0.5 value
# Set the first component to TRUE indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate conditional (on X2 = 0.5) marginal (for X3)
# density approximation at point x
dtrhpa(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind,
tr_left = tr_left, tr_right = tr_right)
## Example demonstrating itrhpa function application.
## Let's approximate the joint interval distribution function of three
## truncated random variables (X1, X2, X3) at lower and upper points
## (0.1, 0.2, 0.3) and (0.4, 0.5, 0.6), respectively, with a Hermite
## polynomial of (1, 2, 3) degrees whose polynomial coefficients equal 1
## except the coefficient related to the x1*(x3^2) polynomial element,
## which equals 2. Also suppose that the normal density's mean vector equals
## (1.1, 1.2, 1.3) while the standard deviation vector is (2.1, 2.2, 2.3).
## Suppose that the lower and upper truncation points are (-1.1, -1.2, -1.3)
## and (1.1, 1.2, 1.3), respectively.
# Prepare initial values
x_lower <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
x_upper <- matrix(c(0.4, 0.5, 0.6), nrow = 1)
tr_left <- matrix(c(-1.1, -1.2, -1.3), nrow = 1)
tr_right <- matrix(c(1.1, 1.2, 1.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate intdf approximation at points x_lower and x_upper
itrhpa(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
tr_left = tr_left, tr_right = tr_right)
# Condition the second component to be 0.7
# Substitute the second component of x_upper with the conditional value 0.7
x_upper <- matrix(c(0.4, 0.7, 0.6), nrow = 1)
# Set the second component to TRUE indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate the conditional (on X2 = 0.7) intdf
# approximation at points x_lower and x_upper
itrhpa(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
tr_left = tr_left, tr_right = tr_right)
# Consider the marginal distribution of the third component
# conditioned on the second component's value 0.7
# Set the first component to TRUE indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate the conditional (on X2 = 0.7) marginal (for X3) truncated
# intdf approximation at points x_lower and x_upper
itrhpa(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind,
tr_left = tr_left, tr_right = tr_right)
## Example demonstrating dhpaDiff function application.
## Let's approximate the joint density function of three random variables
## (X1, X2, X3) at the point (0.1, 0.2, 0.3) with a Hermite polynomial of
## degrees (1, 2, 3) whose polynomial coefficients all equal 1 except the
## coefficient of the x1*(x3^2) element, which equals 2. Also suppose that
## the normal density's mean vector is (1.1, 1.2, 1.3) while the standard
## deviation vector is (2.1, 2.2, 2.3). In this example let's calculate the
## density approximating function's gradient with respect
## to various parameters
# Prepare initial values
x <- matrix(c(0.1, 0.2, 0.3), nrow = 1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate density approximation gradient with
# respect to the polynomial coefficients at point x
dhpaDiff(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
# Condition the second component on the value 0.5
# Substitute the second component of x with the conditioning value 0.5
x <- matrix(c(0.1, 0.5, 0.3), nrow = 1)
# Set the second component to TRUE indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate the conditional (on X2 = 0.5) density approximation's
# gradient with respect to the polynomial coefficients at point x
dhpaDiff(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind)
# Consider the marginal distribution of the third component
# conditioned on the second component's value 0.5
# Set the first component to TRUE indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate the conditional (on X2 = 0.5) marginal (for X3) density
# approximation's gradient with respect to:
# polynomial coefficients
dhpaDiff(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
omit_ind = omit_ind)
# mean
dhpaDiff(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
omit_ind = omit_ind,
type = "mean")
# sd
dhpaDiff(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
omit_ind = omit_ind,
type = "sd")
# x
dhpaDiff(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
omit_ind = omit_ind,
type = "x")
## Example demonstrating ehpaDiff function application.
## Let's approximate the expectation E((X1 ^ 3) * (X2 ^ 1) * (X3 ^ 2)) of
## three random variables (X1, X2, X3) and calculate its gradient
# Distribution parameters
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
pol_coefficients_n <- prod(pol_degrees + 1)
pol_coefficients <- rep(1, pol_coefficients_n)
# Set powers for expectation
expectation_powers <- c(3, 1, 2)
# Calculate expectation approximation gradient
# with respect to all parameters
ehpaDiff(pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
expectation_powers = expectation_powers,
type = "all")
# Let's calculate gradient of E(X1 ^ 3 | (X2 = 1, X3 = 2))
x <- c(0, 1, 2) # x[1] may be an arbitrary (non-NA) value
expectation_powers <- c(3, 0, 0) # expectation_powers[2:3] may be
# arbitrary (non-NA) values
given_ind <- c(2, 3)
ehpaDiff(x = x,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind,
expectation_powers = expectation_powers,
type = "all")
## Example demonstrating ihpaDiff function application.
## Let's approximate the joint interval distribution function (intdf) of
## three random variables (X1, X2, X3) at the lower and upper points
## (0.1, 0.2, 0.3) and (0.4, 0.5, 0.6), respectively, with a Hermite
## polynomial of degrees (1, 2, 3) whose polynomial coefficients all equal 1
## except the coefficient of the x1*(x3^2) element, which equals 2.
## Also suppose that the normal density's mean vector is
## (1.1, 1.2, 1.3) while the standard deviation vector is (2.1, 2.2, 2.3).
## In this example let's calculate the interval distribution approximating
## function's gradient with respect to the polynomial coefficients.
# Prepare initial values
x_lower <- matrix(c(0.1, 0.2, 0.3), nrow=1)
x_upper <- matrix(c(0.4, 0.5, 0.6), nrow=1)
mean <- c(1.1, 1.2, 1.3)
sd <- c(2.1, 2.2, 2.3)
pol_degrees <- c(1, 2, 3)
# Create polynomial powers and indexes correspondence matrix
pol_ind <- polynomialIndex(pol_degrees)
# Set all polynomial coefficients to 1
pol_coefficients <- rep(1, ncol(pol_ind))
pol_degrees_n <- length(pol_degrees)
# Assign coefficient 2 to the polynomial element (x1 ^ 1)*(x2 ^ 0)*(x3 ^ 2)
pol_coefficients[apply(pol_ind, 2, function(x) all(x == c(1, 0, 2)))] <- 2
# Visualize correspondence between polynomial
# elements and their coefficients
as.data.frame(rbind(pol_ind, pol_coefficients),
row.names = c("x1 power", "x2 power",
"x3 power", "coefficients"),
optional = TRUE)
printPolynomial(pol_degrees, pol_coefficients)
# Calculate intdf approximation gradient with respect to the
# polynomial coefficients at points x_lower and x_upper
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
# Condition the second component on the value 0.7
# Substitute the second component of x_upper with the conditioning value 0.7
x_upper <- matrix(c(0.4, 0.7, 0.6), nrow = 1)
# Set the second component to TRUE indicating that it is conditioned
given_ind <- c(FALSE, TRUE, FALSE)
# Calculate the conditional (on X2 = 0.7) intdf approximation's gradient
# with respect to the polynomial coefficients at points x_lower and x_upper
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind)
# Consider the marginal distribution of the third component
# conditioned on the second component's value 0.7
# Set the first component to TRUE indicating that it is omitted
omit_ind <- c(TRUE, FALSE, FALSE)
# Calculate the conditional (on X2 = 0.7) marginal (for X3) intdf
# approximation's gradient with respect to:
# polynomial coefficients
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind)
# mean
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind,
type = "mean")
# sd
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind,
type = "sd")
# x_lower
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind,
type = "x_lower")
# x_upper
ihpaDiff(x_lower = x_lower, x_upper = x_upper,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = given_ind, omit_ind = omit_ind,
type = "x_upper")
## Examples demonstrating qhpa function application.
## Sub-example 1 - univariate distribution
## Consider random variable X
# Distribution parameters
mean <- 1
sd <- 2
pol_degrees <- 2
pol_coefficients <- c(1, 0.1, -0.01)
# The level of quantile
p <- 0.7
# Calculate quantile of X
qhpa(p = p,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
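As a quick sanity check (a sketch assuming the hpa package is attached), the quantile returned by qhpa can be plugged back into phpa, which should approximately recover the probability level:

```r
library(hpa)  # assumed attached

# Parameters from the sub-example above
mean <- 1
sd <- 2
pol_degrees <- 2
pol_coefficients <- c(1, 0.1, -0.01)
p <- 0.7

# Quantile of X at level p
q <- qhpa(p = p,
          pol_coefficients = pol_coefficients,
          pol_degrees = pol_degrees,
          mean = mean, sd = sd)

# Evaluating the cdf approximation at the quantile
# should approximately return p
p_check <- phpa(x = matrix(q, nrow = 1),
                pol_coefficients = pol_coefficients,
                pol_degrees = pol_degrees,
                mean = mean, sd = sd)
abs(p_check - p) # should be close to zero
```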
## Sub-example 2 - marginal distribution
## Consider random vector (X1, X2) and the quantile of X1
# Distribution parameters
mean <- c(1, 1.2)
sd <- c(2, 3)
pol_degrees <- c(2, 2)
pol_coefficients <- c(1, 0.1, -0.01, 0.2, 0.012,
0.0013, 0.0042, 0.00025, 0)
# The level of quantile
p <- 0.7
# Calculate quantile of X1
qhpa(p = p,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
omit_ind = 2) # set the omitted variable index
## Sub-example 3 - marginal and conditional distribution
## Consider random vector (X1, X2, X3) and
## the quantiles of X1|X3 and X1|(X2,X3)
mean <- c(1, 1.2, 0.9)
sd <- c(2, 3, 2.5)
pol_degrees <- c(1, 1, 1)
pol_coefficients <- c(1, 0.1, -0.01, 0.2, 0.012,
0.0013, 0.0042, 0.00025)
# The level of quantile
p <- 0.7
# Calculate quantile of X1|X3 = 0.2
qhpa(p = p,
x = matrix(c(NA, NA, 0.2), nrow = 1), # arbitrary values may be
# set for the unconditioned
# and omitted components
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
omit_ind = 2, # set the omitted variable index
given_ind = 3) # set the conditioned variable index
# Calculate quantile of X1|(X2 = 0.5, X3 = 0.2)
qhpa(p = p,
x = matrix(c(NA, 0.5, 0.2), nrow = 1), # arbitrary values may be
# set for the unconditioned
# and omitted components
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd,
given_ind = c(2, 3)) # set the conditioned
# variables' indexes
## Examples demonstrating rhpa function application.
# Set seed for reproducibility
set.seed(123)
# Distribution parameters
mean <- 1
sd <- 2
pol_degrees <- 2
pol_coefficients <- c(1, 0.1, -0.01)
# Simulate two observations from this distribution
rhpa(n = 2,
pol_coefficients = pol_coefficients,
pol_degrees = pol_degrees,
mean = mean, sd = sd)
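As a hedged consistency check (again assuming hpa is attached), the empirical mean of a large rhpa sample should agree, up to Monte Carlo error, with the first moment computed analytically by ehpa:

```r
library(hpa)  # assumed attached
set.seed(123)

# Parameters from the example above
mean <- 1
sd <- 2
pol_degrees <- 2
pol_coefficients <- c(1, 0.1, -0.01)

# Analytic first moment E(X) via ehpa
m1 <- ehpa(pol_coefficients = pol_coefficients,
           pol_degrees = pol_degrees,
           mean = mean, sd = sd,
           expectation_powers = 1)

# Empirical mean of a large simulated sample
# (base::mean is used since `mean` is shadowed by the parameter above)
x_sim <- rhpa(n = 10000,
              pol_coefficients = pol_coefficients,
              pol_degrees = pol_degrees,
              mean = mean, sd = sd)
abs(base::mean(x_sim) - m1) # should be small (Monte Carlo error)
```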