View source: R/generateCalibration.R

Description
A calibrated classifier is one whose predicted probability for a class closely matches the rate at which that class actually occurs: for example, among data points assigned a predicted probability of 0.8 for class A, approximately 80 percent should belong to class A if the classifier is well calibrated. This is estimated empirically by grouping data points with similar predicted probabilities for each class and plotting the observed rate of each class within each bin against the predicted-probability bins.
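To make the binning idea concrete, the following base-R sketch (not part of the package; prob and truth are simulated, purely illustrative vectors) groups predicted probabilities into bins and computes the observed class rate per bin:

# Simulated predicted probabilities for class A and labels drawn consistently with them
set.seed(1)
prob  <- runif(1000)
truth <- rbinom(1000, size = 1, prob = prob)

# Group predictions into probability bins and compute the observed rate of
# class A in each bin; for a well-calibrated model this tracks the bin midpoints
bins <- cut(prob, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)
tapply(truth, bins, mean)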
Usage

generateCalibrationData(obj, breaks = "Sturges", groups = NULL,
  task.id = NULL)
Arguments

obj
    [(list of) Prediction | (list of) ResampleResult | BenchmarkResult]
    Single prediction object, a list of them, a single resample result, a list
    of them, or a benchmark result. If a named list is passed, the names are
    used to label the learners in downstream plots.

breaks
    [character(1) | numeric]
    If character(1), the algorithm used to choose the probability bins (see
    hist for the available methods); if numeric, the actual probability bin
    boundaries. Default is "Sturges".

groups
    [integer(1)]
    Number of bins to construct. If specified, cut is used instead of hist to
    construct the bins. Default is NULL.

task.id
    [character(1)]
    Which task in a BenchmarkResult to generate data for; ignored otherwise.
    Default is the first task.
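A minimal usage sketch, assuming the mlr package with its bundled sonar.task and a learner backed by MASS (classif.lda); any classifier with predict.type = "prob" works the same way:

library(mlr)

# Train a probabilistic classifier on a task shipped with mlr
lrn  <- makeLearner("classif.lda", predict.type = "prob")
mod  <- train(lrn, sonar.task)
pred <- predict(mod, task = sonar.task)

# Compute calibration data and hand it to plotCalibration
cal <- generateCalibrationData(pred, breaks = "Sturges")
plotCalibration(cal)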
Value

[CalibrationData]. A list containing:

proportion
    [data.frame]
    Observed proportion of each class within each predicted-probability bin,
    per learner; this is what plotCalibration draws.

data
    [data.frame]
    Per-observation predicted probabilities, true class labels, and the bin
    each prediction falls into, per learner.

task
    [TaskDesc]
    Description of the task the predictions were made on.
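Assuming the cal object from the sketch above, the list components can be inspected directly:

str(cal$proportion)   # per-bin observed class proportions
head(cal$data)        # per-observation probabilities, true labels and bins
cal$task              # description of the underlying task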
References

Vuk, Miha, and Curk, Tomaz. "ROC Curve, Lift Chart, and Calibration Plot." Metodoloski zvezki 3(1) (2006): 89-108.
See Also

Other generate_plot_data: generateCritDifferencesData, generateFeatureImportanceData, generateFilterValuesData, generateFunctionalANOVAData, generateLearningCurveData, generatePartialDependenceData, generateThreshVsPerfData, getFilterValues, plotFilterValues

Other calibration: plotCalibration