ObjFct                Calculate model performance metrics

Description

Calculates several model performance metrics.
Usage

ObjFct(sim, obs, groups = NULL)

Arguments

sim      simulated/predicted/modeled values
obs      observed/reference values
groups   vector of groups to compute objective functions for several subsets
Details

The function computes several model performance metrics that are commonly used for the evaluation and optimization of environmental models (Janssen and Heuberger 1995, Legates and McCabe 1999, Krause et al. 2005, Gupta et al. 2009). The following metrics are implemented:
- Pearson correlation coefficient and p-value: cor.test
- Spearman correlation coefficient and p-value: cor.test
- Slope of linear regression: lm
- Coefficient of determination: lm
- Metrics as described in Janssen and Heuberger (1995):
  - Average error
  - Normalized average error
  - Fractional mean bias
  - Relative mean bias
  - Fractional variance
  - Variance ratio
- Kolmogorov-Smirnov statistic: ks.test
- Root mean squared error (RMSE)
- Normalized RMSE
- Index of agreement
- Mean absolute error
- Normalized mean absolute error
- Maximum absolute error
- Median absolute error
- Upper quartile absolute error
- Ratio of scatter
- Modelling efficiency (Nash-Sutcliffe efficiency)
- Percent bias
- Sum of squared errors
- Mean squared error
- Kling-Gupta efficiency and its decomposition (Gupta et al. 2009):
  - Kling-Gupta efficiency
  - fractional contribution of bias
  - fractional contribution of variance
  - fractional contribution of correlation
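The modelling efficiency and the Kling-Gupta efficiency in the list above follow standard definitions from the cited literature. As an illustration of those formulas only (a minimal Python sketch, not the package's own R code; the function names mef and kge are placeholders):

```python
import numpy as np

def mef(sim, obs):
    """Modelling efficiency (Nash-Sutcliffe efficiency):
    1 - SSE / sum of squared deviations of obs from its mean."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al. 2009):
    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2),
    with r = Pearson correlation, alpha = sd(sim)/sd(obs) (variability
    term), beta = mean(sim)/mean(obs) (bias term)."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std(ddof=1) / obs.std(ddof=1)
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.arange(1, 101, dtype=float)  # the 'observations'
print(mef(obs, obs), kge(obs, obs))   # perfect agreement: both 1.0
```

For a perfect simulation both metrics equal 1; a constant bias (as in the `sim <- obs - 50` example below) drives the modelling efficiency below 0 and the KGE bias term away from 1.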
Value

An object of class "ObjFct", which is a list of the computed metrics.
Author(s)

Matthias Forkel <matthias.forkel@geo.tuwien.ac.at> [aut, cre]
References

Gupta, H. V., H. Kling, K. K. Yilmaz, and G. F. Martinez (2009), Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling, Journal of Hydrology, 377(1-2), 80-91, doi:10.1016/j.jhydrol.2009.08.003.

Janssen, P. H. M., and P. S. C. Heuberger (1995), Calibration of process-oriented models, Ecological Modelling, 83, 55-66.

Krause, P., D. P. Boyle, and F. Baese (2005), Comparison of different efficiency criteria for hydrological model assessment, Adv. Geosci., 5, 89-97, doi:10.5194/adgeo-5-89-2005.

Legates, D. R., and G. J. McCabe (1999), Evaluating the use of "goodness-of-fit" measures in hydrologic and hydroclimatic model validation, Water Resour. Res., 35(1), 233-241, doi:10.1029/1998WR900018.
See Also

plot.ObjFct, ObjFct2Text, WollMilchSauPlot
Examples

obs <- 1:100 # 'observations'
# simulated and observed values agree
sim <- obs
ObjFct(sim, obs)
# simulation has a bias
sim <- obs - 50
ObjFct(sim, obs)
# negative correlation
sim <- 100:1
ObjFct(sim, obs)
# same mean, same correlation but smaller variance
sim <- 0.5 * obs + 25.25
ObjFct(sim, obs)
# small scatter around observations
sim <- obs * rnorm(100, 1, 0.1)
ObjFct(sim, obs)
# larger scatter around observations
sim <- obs * rnorm(100, 1, 0.8)
ObjFct(sim, obs)
# bias and larger scatter around observations
sim <- obs * rnorm(100, 2, 0.8)
ObjFct(sim, obs)
ScatterPlot(obs, sim, objfct=TRUE)
# simulation is independent from observations
sim <- rnorm(100, 0, 1)
ObjFct(sim, obs)
# split by groups
sim <- obs * c(rnorm(40, 1, 0.2), rnorm(60, 1.2, 0.5))
groups <- c(rep("subset 1", 40), rep("subset 2", 60))
of <- ObjFct(sim, obs, groups=groups)
of
ScatterPlot(obs, sim, groups=groups, objfct=TRUE)
# convert objective functions to text
ObjFct2Text(of)
ObjFct2Text(of, which="KGE")
ObjFct2Text(of, which=c("R2", "MEF", "KGE"), sep=" ")
# plot scatterplot of two metrics
plot(of)
plot(of, which=c("MEF", "RMSE"))
# analyze relations between objective functions
#----------------------------------------------
# Some objective function metrics are closely related to others.
# This simple example demonstrates the relations between metrics.
# Several experiments are performed: in each experiment, simulations
# with different biases, correlations, and variances relative to the
# observations are created. Objective functions are then computed
# for all experiments.
# the 'observations'
obs <- 1:100
# experiments: create several 'simulations'
n <- 500 # how many experiments?
data <- data.frame(obs=NA, sim=NA, exp=NA)
for (i in 1:n) {
fbias <- runif(1, 0.01, 2) # factor to create the bias
fcor <- runif(1, -2.5, 2.5) # factor to create a different correlation
fsd <- runif(1, 0.1, 2) # factor to create variability
sim <- fbias * obs^(fcor) # create simulations
sim <- sim + rnorm(length(sim), 0, fsd*abs(mean(sim, na.rm=TRUE)))
data <- rbind(data, data.frame(obs=obs, sim=sim, exp=i))
}
data <- na.omit(data)
# plot the first 5 experiments:
plot(sim ~ obs, data=data[1:500,], col=data$exp[1:500], pch=16)
# compute objective function metrics for each experiment
of <- ObjFct(data$sim, data$obs, groups=data$exp)
hist(of$Cor)
# check relations between metrics
plot(of, c("Cor", "Spearman"))
plot(of, c("Cor", "R2"))
plot(of, c("Cor", "IoA"))
plot(of, c("RMSE", "AE"))
plot(of, c("RMSE", "MEF"))
plot(of, c("KS", "MEF"))
plot(of, c("NME", "MEF"))
plot(of, c("NMSE", "MEF"))
plot(of, c("NME", "NAE"))