View source: R/plot_evaluation.R
plot_evaluation    R Documentation
Creates boxplots comparing model performance metrics across training, testing, and full datasets from spatial cross-validation performed by rf_evaluate(). Displays distributions of R-squared, RMSE, and other metrics across all spatial folds.
plot_evaluation(
model,
fill.color = viridis::viridis(3, option = "F", alpha = 0.8, direction = -1),
line.color = "gray30",
verbose = TRUE,
notch = TRUE
)
model | Model evaluated with rf_evaluate().
fill.color | Character vector with three colors (one for each model type: Testing, Training, Full) or a function that generates a color palette. Accepts hexadecimal codes (e.g., "#E64B35FF"). Default: viridis::viridis(3, option = "F", alpha = 0.8, direction = -1). A short illustration follows this list.
line.color | Character string specifying the color of the boxplot borders. Default: "gray30".
verbose | Logical. If TRUE, the plot is printed to the active graphics device. Default: TRUE.
notch | Logical. If TRUE, boxplots are drawn with notches around the median. Default: TRUE.
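For example, fill.color can be supplied either as three explicit colors or as the output of a palette function. The lines below are a brief sketch reusing the example model from the Examples section; the palette option "D" and the line color are arbitrary illustrative choices, not defaults.
# Three colors generated by viridis; any palette call returning three
# colors (or a character vector of three hex codes) works the same way.
plot_evaluation(
  plants_rf,
  fill.color = viridis::viridis(3, option = "D", alpha = 0.8),
  line.color = "black"
)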
This function visualizes the distribution of performance metrics across spatial folds, with separate boxplots for three model variants:
Testing: Performance on spatially independent testing folds (most reliable estimate of generalization)
Training: Performance on training folds (typically optimistic)
Full: Performance on the complete dataset (reference baseline)
Interpreting the plot:
The boxplots show the distribution of each metric across all spatial folds. When reading them:
Testing performance should be reasonably close to training performance (indicates good generalization)
Large gaps between training and testing suggest overfitting (a sketch for quantifying this gap follows below)
Low variance across folds indicates stable, consistent model performance
High variance suggests performance depends strongly on spatial location
The plot includes a title showing the number of spatial folds used in the evaluation.
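To put a number on the training/testing gap mentioned above, the per-fold metrics can be summarized from the evaluation data frame returned by get_evaluation(). The column names used below ("model", "metric", "value") are assumptions about that data frame's long format; check them with names(evaluation_data) before adapting the snippet.
# Median of each metric per model variant (Training, Testing, Full) across
# spatial folds; a large difference between the Training and Testing medians
# points to overfitting.
evaluation_data <- get_evaluation(plants_rf)
aggregate(value ~ model + metric, data = evaluation_data, FUN = median)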
Available metrics:
Displayed metrics depend on the response variable type:
Continuous response: R-squared, RMSE (Root Mean Squared Error), NRMSE (Normalized RMSE)
Binary response: AUC (Area Under ROC Curve), pseudo R-squared
A ggplot object that can be further customized or saved. The plot displays boxplots of performance metrics (R-squared, RMSE, NRMSE, pseudo R-squared, or AUC, depending on the type of response variable) across spatial folds, faceted by metric.
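Because the returned object is a regular ggplot, it can be adjusted with standard ggplot2 layers before saving. The snippet below is a minimal sketch; the theme, title, and file name are arbitrary illustrations rather than part of the function's interface.
library(ggplot2)
# Capture the plot without printing it (verbose = FALSE is assumed to
# suppress printing while still returning the ggplot object).
p <- plot_evaluation(plants_rf, verbose = FALSE)
p <- p +
  theme_minimal() +
  labs(title = "Spatial cross-validation of plants_rf")
# Save the customized plot to disk.
ggsave("plants_rf_evaluation.png", plot = p, width = 8, height = 5)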
rf_evaluate(), get_evaluation(), print_evaluation()
Other visualization:
plot_importance(),
plot_moran(),
plot_optimization(),
plot_residuals_diagnostics(),
plot_response_curves(),
plot_response_surface(),
plot_training_df(),
plot_training_df_moran(),
plot_tuning()
if(interactive()){
data(plants_rf, plants_xy)
# Perform spatial cross-validation
plants_rf <- rf_evaluate(
model = plants_rf,
xy = plants_xy,
repetitions = 5,
n.cores = 1
)
# Visualize evaluation results
plot_evaluation(plants_rf)
# Without notches for simpler boxplots
plot_evaluation(plants_rf, notch = FALSE)
# Custom colors
plot_evaluation(
plants_rf,
fill.color = c("#E64B35FF", "#4DBBD5FF", "#00A087FF")
)
# Print summary statistics
print_evaluation(plants_rf)
# Extract evaluation data for custom analysis
evaluation_data <- get_evaluation(plants_rf)
head(evaluation_data)
}