llgkw: Negative Log-Likelihood for the Generalized Kumaraswamy Distribution

View source: R/RcppExports.R

llgkw {gkwreg}  R Documentation

Negative Log-Likelihood for the Generalized Kumaraswamy Distribution

Description

Computes the negative log-likelihood function for the five-parameter Generalized Kumaraswamy (GKw) distribution given a vector of observations. This function is designed for use in optimization routines (e.g., maximum likelihood estimation).

Usage

llgkw(par, data)

Arguments

par

A numeric vector of length 5 containing the distribution parameters in the order: alpha (\alpha > 0), beta (\beta > 0), gamma (\gamma > 0), delta (\delta \ge 0), lambda (\lambda > 0).

data

A numeric vector of observations. All values must be strictly between 0 and 1 (exclusive).

Details

The probability density function (PDF) of the GKw distribution is given in dgkw. The log-likelihood function \ell(\theta) for a sample \mathbf{x} = (x_1, \dots, x_n) is:

\ell(\theta | \mathbf{x}) = n\ln(\lambda\alpha\beta) - n\ln B(\gamma,\delta+1) + \sum_{i=1}^{n} [(\alpha-1)\ln(x_i) + (\beta-1)\ln(v_i) + (\gamma\lambda-1)\ln(w_i) + \delta\ln(z_i)]

where \theta = (\alpha, \beta, \gamma, \delta, \lambda), B(a,b) is the Beta function (beta), and:

  • v_i = 1 - x_i^{\alpha}

  • w_i = 1 - v_i^{\beta} = 1 - (1-x_i^{\alpha})^{\beta}

  • z_i = 1 - w_i^{\lambda} = 1 - [1-(1-x_i^{\alpha})^{\beta}]^{\lambda}

This function computes -\ell(\theta|\mathbf{x}).

Numerical stability is prioritized using:

  • lbeta function for the log-Beta term.

  • Log-transformations of intermediate terms (v_i, w_i, z_i) and use of log1p where appropriate to handle values close to 0 or 1 accurately.

  • Checks for invalid parameters and data.
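The formula and stability techniques above can be sketched in plain R. This is only an illustrative reimplementation under the parameterization given in Details (the packaged llgkw is implemented in C++ via Rcpp); nll_gkw_sketch is a hypothetical name, not part of gkwreg.

```r
# Minimal base-R sketch of the GKw negative log-likelihood, mirroring the
# log-likelihood in Details and the stability tricks above (lbeta, log1p).
nll_gkw_sketch <- function(par, data) {
  alpha <- par[1]; beta <- par[2]; gamma <- par[3]
  delta <- par[4]; lambda <- par[5]
  # Parameter and data checks mirror the constraints in Arguments:
  # alpha, beta, gamma, lambda > 0; delta >= 0; data in (0, 1)
  if (any(par[c(1, 2, 3, 5)] <= 0) || delta < 0 ||
      any(data <= 0) || any(data >= 1)) {
    return(Inf)
  }
  n <- length(data)
  log_v <- log1p(-data^alpha)          # log(v_i) = log(1 - x_i^alpha)
  log_w <- log1p(-exp(beta * log_v))   # log(w_i) = log(1 - v_i^beta)
  log_z <- log1p(-exp(lambda * log_w)) # log(z_i) = log(1 - w_i^lambda)
  ll <- n * log(lambda * alpha * beta) - n * lbeta(gamma, delta + 1) +
    sum((alpha - 1) * log(data) + (beta - 1) * log_v +
        (gamma * lambda - 1) * log_w + delta * log_z)
  -ll
}
```

As a sanity check, setting gamma = 1, delta = 0, lambda = 1 collapses the GKw density to the two-parameter Kumaraswamy density alpha * beta * x^(alpha-1) * (1 - x^alpha)^(beta-1), so the sketch can be validated against that special case.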

Value

Returns a single double value representing the negative log-likelihood (-\ell(\theta|\mathbf{x})). Returns a large positive value (e.g., Inf) if any parameter values in par are invalid according to their constraints, or if any value in data is not in the interval (0, 1).
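The invalid-input behaviour described above can be exercised directly, assuming the gkwreg package is installed. The comments state the expected behaviour per this section, not verified output:

```r
library(gkwreg)

par_ok <- c(2, 3, 1, 0.5, 0.5)
x_ok   <- c(0.1, 0.5, 0.9)

llgkw(par_ok, x_ok)                       # finite negative log-likelihood
llgkw(c(-2, 3, 1, 0.5, 0.5), x_ok)        # large positive value: alpha violates alpha > 0
llgkw(par_ok, c(0.1, 1.5))                # large positive value: datum outside (0, 1)
```

Returning Inf (rather than NA or an error) for invalid inputs lets gradient-free and line-search optimizers such as optim simply step away from infeasible regions of the parameter space.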

Author(s)

Lopes, J. E.

References

Cordeiro, G. M., & de Castro, M. (2011). A new family of generalized distributions. Journal of Statistical Computation and Simulation, 81(7), 883-898.

Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1-2), 79-88.

See Also

dgkw, pgkw, qgkw, rgkw, grgkw, hsgkw (gradient and Hessian functions, if available), optim, lbeta, log1p

Examples


# Generate sample data from a known GKw distribution
set.seed(123)
true_par <- c(alpha = 2, beta = 3, gamma = 1.0, delta = 0.5, lambda = 0.5)
sample_data <- rgkw(100, alpha = true_par[1], beta = true_par[2],
                    gamma = true_par[3], delta = true_par[4], lambda = true_par[5])
hist(sample_data, breaks = 20, main = "Sample GKw Data")

# --- Maximum Likelihood Estimation using optim ---
# Initial parameter guess (can be crucial)
start_par <- c(1.5, 2.5, 1.2, 0.3, 0.6)

# Perform optimization (minimizing negative log-likelihood)
# Ensure data is passed correctly to llgkw
mle_result <- stats::optim(par = start_par,
                           fn = llgkw,
                           method = "BFGS", # "L-BFGS-B" with box constraints may be safer
                           hessian = TRUE,
                           data = sample_data)

# Check convergence and results
if (mle_result$convergence == 0) {
  print("Optimization converged successfully.")
  mle_par <- mle_result$par
  print("Estimated parameters:")
  print(mle_par)
  print("True parameters:")
  print(true_par)

  # Standard errors from Hessian (optional)
  # fisher_info <- solve(mle_result$hessian) # Need positive definite Hessian
  # std_errors <- sqrt(diag(fisher_info))
  # print("Approximate Standard Errors:")
  # print(std_errors)

} else {
  warning("Optimization did not converge!")
  print(mle_result$message)
}

# --- Compare numerical and analytical derivatives (if available) ---
# Requires the 'numDeriv' package and analytical functions 'grgkw', 'hsgkw'
if (requireNamespace("numDeriv", quietly = TRUE) &&
    exists("grgkw") && exists("hsgkw") && mle_result$convergence == 0) {

  cat("\nComparing Derivatives at MLE estimates:\n")

  # Numerical derivatives
  num_grad <- numDeriv::grad(func = llgkw, x = mle_par, data = sample_data)
  num_hess <- numDeriv::hessian(func = llgkw, x = mle_par, data = sample_data)

  # Analytical derivatives (assuming they exist)
  # Note: grgkw/hsgkw might compute derivatives of log-likelihood,
  # while llgkw is negative log-likelihood. Adjust signs if needed.
  # Assuming grgkw/hsgkw compute derivatives of NEGATIVE log-likelihood here:
  ana_grad <- grgkw(par = mle_par, data = sample_data)
  ana_hess <- hsgkw(par = mle_par, data = sample_data)

  # Check differences (should be small if analytical functions are correct)
  cat("Difference between numerical and analytical gradient:\n")
  print(summary(abs(num_grad - ana_grad)))

  cat("Difference between numerical and analytical Hessian:\n")
  print(summary(abs(num_hess - ana_hess)))

} else {
   cat("\nSkipping derivative comparison.\n")
   cat("Requires 'numDeriv' package and functions 'grgkw', 'hsgkw'.\n")
}




gkwreg documentation built on April 16, 2025, 1:10 a.m.