
This function produces statistics to compare the predictive performance of the different component models, as well as of the EBMA model itself, for either the calibration or the test period. For binary models it currently calculates the area under the ROC curve (`auc`), the `brier` score, the percent of observations predicted correctly (`percCorrect`), and the proportional reduction in error compared to some baseline model (`pre`). For models with normally distributed outcomes, `compareModels` can be used to calculate the root mean squared error (`rmse`) as well as the mean absolute error (`mae`).
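On the binary side, these statistics are simple functions of the predicted probabilities and observed outcomes. The following toy sketch is illustration only, not package code; the threshold of 0.5 and the constant-zero baseline mirror the function's documented defaults:

```r
# Toy data: observed binary outcomes and predicted probabilities
obs  <- c(1, 1, 0, 0, 1, 0)
pred <- c(0.9, 0.6, 0.4, 0.2, 0.3, 0.7)
threshold <- 0.5

# Brier score: mean squared distance between prediction and outcome
brier <- mean((pred - obs)^2)

# Percent correctly classified at the chosen threshold
percCorrect <- mean((pred >= threshold) == (obs == 1))

# Proportional reduction in error versus a baseline that always predicts 0
baseErrors  <- sum(obs != 0)
modelErrors <- sum((pred >= threshold) != (obs == 1))
pre <- (baseErrors - modelErrors) / baseErrors

# AUC via the rank-sum (Mann-Whitney) identity
r   <- rank(pred)
n1  <- sum(obs == 1)
n0  <- sum(obs == 0)
auc <- (sum(r[obs == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
```

On these six observations the sketch yields a Brier score of 0.225, two-thirds of cases classified correctly, and a one-third proportional reduction in error over the zero baseline.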

```
compareModels(
.forecastData,
.period = "calibration",
.fitStatistics = c("brier", "auc", "percCorrect", "pre"),
.threshold = 0.5,
.baseModel = 0,
...
)
## S4 method for signature 'ForecastData'
compareModels(.forecastData, .period, .fitStatistics, .threshold, .baseModel)
```

`.forecastData`
An object of class 'ForecastData'.

`.period`
Can take the value "calibration" or "test", indicating the period for which the fit statistics should be calculated.

`.fitStatistics`
A vector naming the statistics that should be calculated. Possible values include "auc", "brier", "percCorrect", and "pre" for logit models, and "mae" and "rmse" for normal models.

`.threshold`
The threshold above which a prediction is counted as "positive" by the model for binary dependent variables.

`.baseModel`
Vector containing the predictions used to calculate the proportional reduction of error ("pre").

`...`
Not implemented.
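For normal-outcome models, the two supported statistics reduce to one-line computations. A toy sketch (illustration only, not package code):

```r
# Toy data: observed continuous outcomes and model predictions
obs  <- c(2.0, 3.5, 1.0, 4.0)
pred <- c(2.5, 3.0, 1.5, 3.0)

rmse <- sqrt(mean((pred - obs)^2))  # root mean squared error
mae  <- mean(abs(pred - obs))       # mean absolute error
```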

A data object of the class 'CompareModels' with the following slots:

`fitStatistics`
The fit statistics calculated for each model.

`period`
The period, "calibration" or "test", for which the statistics were calculated.

`threshold`
The threshold above which a prediction was counted as "positive" by the model.

`baseModel`
Vector containing the predictions used to calculate the proportional reduction of error ("pre").
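Assuming standard S4 conventions, the slots of the returned object are read with `@`. The class below is a stand-in defined only for illustration; it is not the package's actual 'CompareModels' class:

```r
library(methods)  # S4 machinery

# Stand-in class mirroring the documented slots (illustration only)
setClass("CompareModelsDemo",
         representation(fitStatistics = "matrix", period = "character",
                        threshold = "numeric", baseModel = "numeric"))

out <- new("CompareModelsDemo",
           fitStatistics = matrix(c(0.225, 0.667), nrow = 1,
                                  dimnames = list("EBMA", c("brier", "auc"))),
           period = "calibration", threshold = 0.5, baseModel = 0)

out@period         # period the statistics were computed for
out@fitStatistics  # matrix of statistics per model
```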

Michael D. Ward <michael.d.ward@duke.edu> and Jacob M. Montgomery <jacob.montgomery@wustl.edu> and Florian M. Hollenbach <florian.hollenbach@tamu.edu>

Montgomery, Jacob M., Florian M. Hollenbach and Michael D. Ward. (2012). Improving Predictions Using Ensemble Bayesian Model Averaging. *Political Analysis*. **20**: 271-291.

`ensembleBMA`

```
## Not run: 
data(calibrationSample)
data(testSample)
this.ForecastData <- makeForecastData(
  .predCalibration = calibrationSample[, c("LMER", "SAE", "GLM")],
  .outcomeCalibration = calibrationSample[, "Insurgency"],
  .predTest = testSample[, c("LMER", "SAE", "GLM")],
  .outcomeTest = testSample[, "Insurgency"],
  .modelNames = c("LMER", "SAE", "GLM"))
this.ensemble <- calibrateEnsemble(this.ForecastData, model = "logit", tol = 0.001, exp = 3)
compareModels(this.ensemble, "calibration")
compareModels(this.ensemble, "test")
## End(Not run)
```
