get_evaluation: Extract evaluation metrics from cross-validated model

View source: R/get_evaluation.R


Extract evaluation metrics from cross-validated model

Description

Extracts aggregated performance metrics from a model evaluated with rf_evaluate().

Usage

get_evaluation(model)

Arguments

model

Model object with class rf_evaluate from rf_evaluate().

Details

This function returns aggregated statistics across all cross-validation repetitions. The "Testing" model metrics indicate the model's ability to generalize to unseen spatial locations.
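For example, the "Testing" rows can be filtered out of the returned data frame to inspect generalization performance directly. A minimal sketch, assuming m_evaluated is a model evaluated with rf_evaluate() as in the Examples below (column names follow the Value section):

# keep only the "Testing" rows, which summarize performance on unseen folds
evaluation <- get_evaluation(m_evaluated)
testing <- evaluation[evaluation$model == "Testing", ]
testing[, c("metric", "mean", "sd")]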

Value

Data frame with aggregated evaluation metrics containing the columns below; a small reshaping sketch follows the list.

  • model: Model type - "Full" (original model), "Training" (trained on training folds), or "Testing" (performance on testing folds, representing generalization ability).

  • metric: Metric name - "rmse", "nrmse", "r.squared", or "pseudo.r.squared".

  • mean, sd, min, max: Summary statistics across cross-validation repetitions.
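Because the metrics are stored in long format (one row per model type and metric), they can be reshaped for a side-by-side comparison. A minimal sketch with base R, assuming eval_metrics is the data frame returned by get_evaluation() as in the Examples below:

# spread the mean of each metric into its own column, one row per model type
means_wide <- reshape(
  eval_metrics[, c("model", "metric", "mean")],
  idvar = "model",
  timevar = "metric",
  direction = "wide"
)
means_wide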

See Also

rf_evaluate(), plot_evaluation(), print_evaluation()

Other model_info: get_importance(), get_importance_local(), get_moran(), get_performance(), get_predictions(), get_residuals(), get_response_curves(), get_spatial_predictors(), print.rf(), print_evaluation(), print_importance(), print_moran(), print_performance()

Examples


if (interactive()) {

# Load the example data: a fitted model and its case coordinates
data(plants_rf, plants_xy)

# Evaluate the model with spatial cross-validation (5 repetitions, 1 core)
m_evaluated <- rf_evaluate(
  model = plants_rf,
  xy = plants_xy,
  repetitions = 5,
  n.cores = 1
)

# Extract and print the aggregated evaluation metrics
eval_metrics <- get_evaluation(m_evaluated)
eval_metrics

}

