This function is a graphical tool to represent evaluation scores of models produced with biomod2 according to two different evaluation metrics. Models can be grouped in several ways (by algorithm, by cross-validation run, ...) to highlight potential differences in model quality due to the chosen algorithm, cross-validation sampling bias, etc. Each point represents the average evaluation score of a group; lines represent the standard deviation of the group's evaluation scores.
models_scores_graph(obj, metrics = NULL, by = 'models', plot = TRUE, ...)
obj
a "BIOMOD.models.out" or "BIOMOD.EnsembleModeling.out" object
metrics |
character vector of 2 chosen metrics (e.g. c("ROC", "TSS")); if not supplied, the first two evaluation metrics computed at the modeling stage will be selected.
by |
character (default 'models'), the way evaluation scores are grouped. Should be one of 'models', 'algos', 'cv_run' or 'data_set' (see Details)
plot |
logical (default TRUE); should the plot be produced?
... |
additional graphical arguments (see Details)
Description of the by argument:
The by argument refers to the way model scores are combined to compute mean and standard deviation. It can take the following values:
models
: group evaluation scores according to top-level models. In the "BIOMOD.models.out" input case, top-level models are, for instance, GLM, GAM, RF, SRE..., whereas in the "BIOMOD.EnsembleModeling.out" input case they are EMcaByTSS (committee averaging using the TSS score), EMwmeanByROC (weighted mean using ROC scores), and so on.
algos
: if you work with a "BIOMOD.models.out" object, algos is equivalent to models. If you work with a "BIOMOD.EnsembleModeling.out" object, it refers to the formal models, i.e. GLM, GAM, RF, SRE... (it should also be mergedAlgo in this case).
cv_run
: the cross-validation run, e.g. run1, run2, ... It should be mergedRun in the EnsembleModeling input case.
data_set
: the data set (resp. pseudo-absence data set) used to group scores, e.g. PA1, PA2, ... if pseudo-absence sampling has been performed, or AllData if not. It should also be mergedData in the EnsembleModeling case.
Additional arguments (...):
The following additional graphical parameters may be given:
xlim
the plotting range for the first evaluation metric
ylim
the plotting range for the second evaluation metric
main
main plot title
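As a sketch of how these graphical parameters might be passed (myBiomodModelOut is assumed to come from running example(BIOMOD_Modeling), as in the Examples section; the axis ranges and title are illustrative values):

```r
## hypothetical call illustrating the additional graphical arguments
## myBiomodModelOut assumed to be produced by example(BIOMOD_Modeling)
gg <- models_scores_graph(myBiomodModelOut,
                          by      = 'models',
                          metrics = c('ROC', 'TSS'),
                          xlim    = c(0.5, 1),  ## range for first metric (ROC)
                          ylim    = c(0, 1),    ## range for second metric (TSS)
                          main    = "Model evaluation scores (ROC vs TSS)")
```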
A ggplot2 plotting object is returned, meaning the user can then easily customize this plot (see Examples).
This function was inspired by Elith*, J., H. Graham*, C., P. Anderson, R., Dudik, M., Ferrier, S., Guisan, A., J. Hijmans, R., Huettmann, F., R. Leathwick, J., Lehmann, A., Li, J., G. Lohmann, L., A. Loiselle, B., Manion, G., Moritz, C., Nakamura, M., Nakazawa, Y., McC. M. Overton, J., Townsend Peterson, A., J. Phillips, S., Richardson, K., Scachetti-Pereira, R., E. Schapire, R., Soberon, J., Williams, S., S. Wisz, M. and E. Zimmermann, N. (2006), Novel methods improve prediction of species distributions from occurrence data. Ecography, 29: 129-151. doi: 10.1111/j.2006.0906-7590.04596.x (fig. 3)
Damien Georges
BIOMOD_Modeling, BIOMOD_EnsembleModeling
## this example is based on the BIOMOD_Modeling function example
example(BIOMOD_Modeling)
## we will need ggplot2 package to produce our custom version of the graphs
require(ggplot2)
## plot evaluation models score graph
### by models
gg1 <- models_scores_graph( myBiomodModelOut,
by = 'models',
metrics = c('ROC','TSS') )
## we see an influence of the chosen algorithm on model performance
## e.g. RF models perform much better than SRE
### by cross validation run
gg2 <- models_scores_graph( myBiomodModelOut,
by = 'cv_run',
metrics = c('ROC','TSS') )
## there is no difference in model quality when we focus on
## cross-validation sampling
### some graphical customisations
gg1_custom <-
gg1 +
ggtitle("Diff between RF and SRE evaluation scores") + ## add title
scale_colour_manual(values=c("green", "blue")) ## change colors
gg1_custom