Description

Computes the bias-variance decomposition of the misclassification rate according to the approaches of James (2003) and Domingos (2000).

Arguments

`y`
Predicted class labels on a test data set, based on multiple training data sets.

`grouping`
Vector of true class labels (a `factor`).

`ybayes`
(Optional.) Bayes prediction (a `factor`).

`posterior`
(Optional.) Matrix of posterior probabilities, either known or estimated. The columns are assumed to be ordered according to the factor levels of `grouping`.

`ybest`
Prediction from the best fitting model on the whole population (a `factor`).

`...`
Currently unused.
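For reference, a Bayes prediction of the kind expected in `ybayes` can be derived from a posterior matrix whose columns follow the factor levels of `grouping`. The following sketch is illustrative R code, not this package's own implementation; the helper name is hypothetical:

```r
# Illustrative sketch (hypothetical helper, not part of this package's API):
# pick, for each observation, the class with the largest posterior probability.
bayes_from_posterior <- function(posterior) {
  factor(colnames(posterior)[max.col(posterior, ties.method = "first")],
         levels = colnames(posterior))
}

post <- matrix(c(0.7, 0.2,    # P(class "a") for observations 1 and 2
                 0.3, 0.8),   # P(class "b") for observations 1 and 2
               ncol = 2, dimnames = list(NULL, c("a", "b")))
bayes_from_posterior(post)    # "a" for observation 1, "b" for observation 2
```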

Details

If `posterior` is specified, `ybayes` is calculated from the posterior probabilities, and the posteriors are used to calculate or estimate the noise, the misclassification rate, the systematic effect, and the variance effect; any `ybayes` argument given in addition is ignored. If only `ybayes` is specified, its empirical distribution is inferred and used to calculate the quantities of interest. If neither `posterior` nor `ybayes` is specified, the noise level is assumed to be zero and the remaining quantities are calculated under this assumption.
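For the zero-noise case (neither `posterior` nor `ybayes` given), the 0-1 loss decomposition of Domingos (2000) for a single test observation can be sketched as below. The function name and the exact column layout are illustrative, not this package's API:

```r
# Illustrative sketch of Domingos' (2000) 0-1 loss decomposition for one
# test observation under the zero-noise assumption (hypothetical helper).
bv01 <- function(pred, truth) {
  # pred:  predicted labels for one test point, one per training set
  # truth: the true class label of that test point
  tab   <- table(pred)
  ymain <- names(tab)[which.max(tab)]      # main (majority-vote) prediction
  bias     <- as.numeric(ymain != truth)   # 0-1 bias of the main prediction
  variance <- mean(pred != ymain)          # spread around the main prediction
  error    <- mean(pred != truth)          # misclassification probability
  # In the two-class, zero-noise case: error = bias + variance where the
  # main prediction is unbiased, and error = bias - variance where it is not.
  data.frame(error = error, bias = bias, variance = variance, ymain = ymain)
}

bv01(factor(c("a", "a", "b", "a", "b")), truth = "a")
# error = 0.4, bias = 0, variance = 0.4, ymain = "a"
```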

Value

A `data.frame` with the following columns:

`error`
Estimated misclassification probability.

`noise`
(Only if `posterior` or `ybayes` is specified.) Noise.

`bias`
Bias.

`model.bias`
(Only if `ybest` is specified.) Model bias.

`estimation.bias`
(Only if `ybest` is specified.) Estimation bias.

`variance`
Variance.

`unbiased.variance`
Unbiased variance.

`biased.variance`
Biased variance.

`net.variance`
Point-wise net variance.

`systematic.effect`
Systematic effect.

`systematic.model.effect`
(Only if `ybest` is specified.) Systematic model effect.

`systematic.estimation.effect`
(Only if `ybest` is specified.) Systematic estimation effect.

`variance.effect`
Variance effect.

`ymain`
Main prediction.

`ybayes`
(Only if `posterior` or `ybayes` is specified.) The Bayes prediction.

`size`
Numeric vector with one entry per test observation, giving the number of predictions made for that observation.

References

Domingos, P. (2000). A unified bias-variance decomposition for zero-one and squared loss. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 564–569. AAAI Press / The MIT Press.

James, G. M. (2003). Variance and bias for general loss functions. *Machine Learning*, **51**(2), 115–135.
