grgkw: Gradient of the Negative Log-Likelihood for the Generalized Kumaraswamy (GKw) Distribution
Computes the gradient vector (vector of partial derivatives) of the negative log-likelihood function for the five-parameter Generalized Kumaraswamy (GKw) distribution. This analytical gradient is typically supplied to gradient-based optimizers for efficient maximum likelihood estimation.
grgkw(par, data)
par: A numeric vector of length 5 containing the distribution parameters in the order: alpha (\alpha), beta (\beta), gamma (\gamma), delta (\delta), lambda (\lambda).

data: A numeric vector of observations. All values must be strictly between 0 and 1 (exclusive).
The components of the gradient vector of the negative log-likelihood, -\nabla \ell(\theta | \mathbf{x}), are:
-\frac{\partial \ell}{\partial \alpha} = -\frac{n}{\alpha} - \sum_{i=1}^{n}\ln(x_i) +
\sum_{i=1}^{n}\left[x_i^{\alpha} \ln(x_i) \left(\frac{\beta-1}{v_i} -
\frac{(\gamma\lambda-1) \beta v_i^{\beta-1}}{w_i} +
\frac{\delta \lambda \beta v_i^{\beta-1} w_i^{\lambda-1}}{z_i}\right)\right]
-\frac{\partial \ell}{\partial \beta} = -\frac{n}{\beta} - \sum_{i=1}^{n}\ln(v_i) +
\sum_{i=1}^{n}\left[v_i^{\beta} \ln(v_i) \left(\frac{\gamma\lambda-1}{w_i} -
\frac{\delta \lambda w_i^{\lambda-1}}{z_i}\right)\right]
-\frac{\partial \ell}{\partial \gamma} = n[\psi(\gamma) - \psi(\gamma+\delta+1)] -
\lambda\sum_{i=1}^{n}\ln(w_i)
-\frac{\partial \ell}{\partial \delta} = n[\psi(\delta+1) - \psi(\gamma+\delta+1)] -
\sum_{i=1}^{n}\ln(z_i)
-\frac{\partial \ell}{\partial \lambda} = -\frac{n}{\lambda} -
\gamma\sum_{i=1}^{n}\ln(w_i) + \delta\sum_{i=1}^{n}\frac{w_i^{\lambda}\ln(w_i)}{z_i}
where:
v_i = 1 - x_i^{\alpha}
w_i = 1 - v_i^{\beta} = 1 - (1-x_i^{\alpha})^{\beta}
z_i = 1 - w_i^{\lambda} = 1 - [1-(1-x_i^{\alpha})^{\beta}]^{\lambda}
and \psi(\cdot) is the digamma function (see digamma).
Numerical stability is ensured through careful implementation, including input validation and safe handling of intermediate quantities that may become very small or very large; the computation leverages the Armadillo C++ library for efficiency.
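For illustration, the five formulas above can be translated directly into a pure-R sketch. This is a hypothetical reference implementation (the name grgkw_ref and the simple positivity check on the parameters are assumptions), not the package's C++ code:

```r
# Pure-R sketch of the analytical gradient of the negative log-likelihood.
# grgkw_ref is a hypothetical name; the packaged grgkw is implemented in C++.
grgkw_ref <- function(par, data) {
  alpha <- par[1]; beta <- par[2]; gamma <- par[3]
  delta <- par[4]; lambda <- par[5]
  # Assumed validity check: strictly positive parameters, data in (0, 1)
  if (any(par <= 0) || any(data <= 0) || any(data >= 1)) return(rep(NaN, 5))
  n  <- length(data)
  lx <- log(data)
  v  <- 1 - data^alpha      # v_i = 1 - x_i^alpha
  w  <- 1 - v^beta          # w_i = 1 - v_i^beta
  z  <- 1 - w^lambda        # z_i = 1 - w_i^lambda
  xa <- data^alpha * lx     # x_i^alpha * log(x_i)
  c(-n/alpha - sum(lx) +
      sum(xa * ((beta - 1)/v -
                (gamma*lambda - 1)*beta*v^(beta - 1)/w +
                delta*lambda*beta*v^(beta - 1)*w^(lambda - 1)/z)),
    -n/beta - sum(log(v)) +
      sum(v^beta * log(v) * ((gamma*lambda - 1)/w -
                             delta*lambda*w^(lambda - 1)/z)),
    n*(digamma(gamma) - digamma(gamma + delta + 1)) - lambda*sum(log(w)),
    n*(digamma(delta + 1) - digamma(gamma + delta + 1)) - sum(log(z)),
    -n/lambda - gamma*sum(log(w)) + delta*sum(w^lambda*log(w)/z))
}
```

A quick sanity check is to compare this against central finite differences of the negative log-likelihood; the two should agree to several decimal places.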
Returns a numeric vector of length 5 containing the partial derivatives of the negative log-likelihood function -\ell(\theta | \mathbf{x}) with respect to each parameter: (-\partial \ell/\partial \alpha, -\partial \ell/\partial \beta, -\partial \ell/\partial \gamma, -\partial \ell/\partial \delta, -\partial \ell/\partial \lambda).
Returns a vector of NaN if any parameter value violates its constraints, or if any value in data is not in the open interval (0, 1).
Lopes, J. E.
Cordeiro, G. M., & de Castro, M. (2011). A new family of generalized distributions. Journal of Statistical Computation and Simulation, 81(7), 883-898.
Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1-2), 79-88.
llgkw (negative log-likelihood), hsgkw (Hessian matrix), dgkw (density), optim, grad (for numerical gradient comparison), digamma
# Generate sample data from a known GKw distribution
set.seed(123)
true_par <- c(alpha = 2, beta = 3, gamma = 1.0, delta = 0.5, lambda = 0.5)
sample_data <- rgkw(100, alpha = true_par[1], beta = true_par[2],
                    gamma = true_par[3], delta = true_par[4], lambda = true_par[5])
# --- Use in Optimization (e.g., with optim using analytical gradient) ---
start_par <- c(1.5, 2.5, 1.2, 0.3, 0.6) # Initial guess
# Optimization using analytical gradient
mle_result_gr <- stats::optim(par = start_par,
                              fn = llgkw,      # Objective function (negative log-likelihood)
                              gr = grgkw,      # Analytical gradient function
                              method = "BFGS", # Gradient-based method
                              hessian = TRUE,
                              data = sample_data)
if (mle_result_gr$convergence == 0) {
  print("Optimization with analytical gradient converged.")
  mle_par_gr <- mle_result_gr$par
  print("Estimated parameters:")
  print(mle_par_gr)
} else {
  warning("Optimization with analytical gradient failed!")
}
# --- Compare analytical gradient to numerical gradient ---
# Requires the 'numDeriv' package
if (requireNamespace("numDeriv", quietly = TRUE) && mle_result_gr$convergence == 0) {
  cat("\nComparing Gradients at MLE estimates:\n")
  # Numerical gradient of the negative log-likelihood function
  num_grad <- numDeriv::grad(func = llgkw, x = mle_par_gr, data = sample_data)
  # Analytical gradient (output of grgkw)
  ana_grad <- grgkw(par = mle_par_gr, data = sample_data)
  cat("Numerical Gradient:\n")
  print(num_grad)
  cat("Analytical Gradient:\n")
  print(ana_grad)
  # Check differences (should be small)
  cat("Max absolute difference between gradients:\n")
  print(max(abs(num_grad - ana_grad)))
} else {
  cat("\nSkipping gradient comparison (requires 'numDeriv' package or convergence).\n")
}