optimflex provides a flexible suite of derivative-based nonlinear
optimization algorithms. It is designed for researchers who require
rigorous convergence control, particularly in complex models such as
structural equation models (SEM).
Standard optimization functions often rely on a single, fixed stopping
rule. optimflex instead offers several convergence criteria that can be
enabled and combined individually through control flags.
```r
# install.packages("devtools")
# devtools::install_github("yourusername/optimflex")
```

```r
library(optimflex)

rosenbrock <- function(x) {
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}

res <- double_dogleg(
  start = c(-1.2, 1.0),
  objective = rosenbrock,
  control = list(
    use_grad = TRUE,
    use_rel_x = TRUE,
    use_posdef = TRUE
  )
)

print(res$par)
#> [1] 0.9999955 0.9999910
```
| Flag         | Description                     |
|:-------------|:--------------------------------|
| `use_abs_f`  | Absolute function change        |
| `use_rel_f`  | Relative function change        |
| `use_abs_x`  | Absolute parameter change       |
| `use_rel_x`  | Relative parameter change       |
| `use_grad`   | Gradient infinity norm          |
| `use_posdef` | Positive-definite Hessian check |
| `use_pred_f` | Predicted function decrease     |
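As a sketch of how these flags combine, the following reuses the `double_dogleg()` interface from the example above on a simple quadratic objective. The objective function and starting values here are illustrative assumptions, not part of the package.

```r
library(optimflex)

# Simple convex test objective, used only for illustration
sphere <- function(x) sum(x^2)

# Enable two stopping criteria at once: the run can terminate on a
# small gradient infinity norm OR a small relative function change.
res <- double_dogleg(
  start = c(3, -4),
  objective = sphere,
  control = list(
    use_grad  = TRUE,  # gradient infinity norm criterion
    use_rel_f = TRUE   # relative function change criterion
  )
)

print(res$par)
```

Flags not listed in `control` are left at their defaults, so a run can be made as strict or as permissive as the model requires.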