summary.demonoid.ppc: Posterior Predictive Check Summary


View source: R/summary.demonoid.ppc.R

Description

This may be used to summarize either new, unobserved instances of y (called y[new]) or replicates of y (called y[rep]). Either y[new] or y[rep] is summarized, depending on predict.demonoid.

Usage

## S3 method for class 'demonoid.ppc'
summary(object, Categorical, Rows,
     Discrep, d, Quiet, ...)

Arguments

object

An object of class demonoid.ppc is required.

Categorical

Logical. If TRUE, then y and yhat are considered to be categorical (such as y=0 or y=1), rather than continuous.

Rows

An optional vector of row numbers, for example c(1:10). All rows will be estimated, but only these rows will appear in the summary.

Discrep

A character string indicating a discrepancy test. Discrep defaults to NULL. Valid character strings when y is continuous are: "Chi-Square", "Chi-Square2", "Kurtosis", "L-criterion", "MASE", "MSE", "PPL", "Quadratic Loss", "Quadratic Utility", "RMSE", "Skewness", "max(yhat[i,]) > max(y)", "mean(yhat[i,]) > mean(y)", "mean(yhat[i,] > d)", "mean(yhat[i,] > mean(y))", "min(yhat[i,]) < min(y)", "round(yhat[i,]) = d", and "sd(yhat[i,]) > sd(y)". The only valid character string when y is categorical is "p(yhat[i,] != y[i])". Kurtosis and skewness are not discrepancies, but are included here for convenience.

d

This is an optional integer to be used with the Discrep argument above, and it defaults to d=0.

Quiet

Logical. When FALSE (the default), results are printed to the console. When TRUE, printing is suppressed.

...

Additional arguments are unused.

Details

This function summarizes an object of class demonoid.ppc, which consists of posterior predictive checks on either y[new] or y[rep], depending respectively on whether unobserved instances of y or the model sample of y was used in the predict.demonoid function.

The purpose of a posterior predictive check is to assess how well (or poorly) the model fits the data, or to assess discrepancies between the model and the data. For more information on posterior predictive checks, see https://web.archive.org/web/20150215050702/http://www.bayesian-inference.com/posteriorpredictivechecks.

When y is continuous and known, this function estimates the predictive concordance between y and y[rep] as per Gelfand (1996), and the predictive quantile (PQ), which is used both for record-level outlier detection and to calculate Gelfand's predictive concordance.
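As an illustrative sketch (not the package's internal code), the predictive quantile and concordance described above can be computed from a matrix of posterior predictive samples, where rows are records and columns are samples; the data here are simulated for demonstration only.

```r
set.seed(1)
y    <- rnorm(10)
yhat <- matrix(rnorm(10 * 1000, mean = y), nrow = 10)  # 1000 samples per record

# PQ[i] = Pr(y[rep] >= y[i]) for each record of y
PQ <- rowMeans(yhat >= y)

# A record is concordant when its PQ falls inside the central 95% interval;
# Gelfand's suggested goal is 95% concordance.
Concordance <- mean(PQ >= 0.025 & PQ <= 0.975)
```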

When y is categorical and known, this function estimates the record-level lift, which is p(yhat[i,] = y[i]) / [p(y = j) / n], or the number of correctly predicted samples over the rate of that category of y in vector y.
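A minimal sketch of record-level lift in its assumed form, record-level accuracy divided by the base rate of the observed category (the package's exact scaling may differ); all data here are simulated.

```r
set.seed(1)
y    <- sample(0:1, 20, replace = TRUE)
yhat <- matrix(sample(0:1, 20 * 500, replace = TRUE), nrow = 20)

acc      <- rowMeans(yhat == y)                    # p(yhat[i,] = y[i]) per record
baserate <- table(y)[as.character(y)] / length(y)  # proportion of each record's category
Lift     <- as.numeric(acc / baserate)
Mean.Lift <- mean(Lift)                            # reported when Categorical=TRUE
```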

A discrepancy measure is an approach to studying discrepancies between the model and data (Gelman et al., 1996). The available discrepancy measures are listed under the Discrep argument above. A brief introduction to discrepancy analysis follows:

After observing a discrepancy statistic, the user attempts to improve the model by revising the model to account for discrepancies between data and the current model. This approach to model revision relies on an analysis of the discrepancy statistic. Given a discrepancy measure that is based on model fit, such as the L-criterion, the user may correlate the record-level discrepancy statistics with the dependent variable, independent variables, and interactions of independent variables. The discrepancy statistic should not correlate with the dependent and independent variables. Interaction variables may be useful for exploring new relationships that are not in the current model. Alternatively, a decision tree may be applied to the record-level discrepancy statistics, given the independent variables, in an effort to find relationships in the data that may be helpful in the model. Model revision may involve the addition of a finite mixture component to account for outliers in discrepancy, or specifying the model with a distribution that is more robust to outliers. There are too many suggestions to include here, and discrepancy analysis varies by model.
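One step of the discrepancy analysis described above, correlating record-level discrepancy statistics with an independent variable, can be sketched as follows (all objects here are hypothetical, simulated stand-ins):

```r
set.seed(1)
x       <- rnorm(100)  # a hypothetical independent variable
discrep <- rnorm(100)  # hypothetical record-level discrepancy statistics

# Ideally this correlation is near zero; a strong correlation suggests the
# current model fails to account for a relationship involving x.
r <- cor(discrep, x)
```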

Value

This function returns a list with the following components:

BPIC

The Bayesian Predictive Information Criterion (BPIC) was introduced by Ando (2007). BPIC is a variation of the Deviance Information Criterion (DIC) that has been modified for predictive distributions. For more information on DIC (Spiegelhalter et al., 2002), see the accompanying vignette entitled "Bayesian Inference". BPIC = Dbar + 2pD. The goal is to minimize BPIC.
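The BPIC formula above can be sketched in a few lines, assuming a vector of deviance values evaluated over posterior samples and the deviance at the posterior means (both hypothetical inputs here):

```r
set.seed(1)
Dev  <- rnorm(1000, mean = 120, sd = 5)  # hypothetical deviance samples
Dhat <- 118                              # hypothetical deviance at posterior means

Dbar <- mean(Dev)      # posterior mean deviance
pD   <- Dbar - Dhat    # effective number of parameters (Spiegelhalter et al., 2002)
BPIC <- Dbar + 2 * pD  # Ando (2007); smaller is better
```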

Concordance

This is the percentage of the records of y that are within the 95% quantile-based probability interval (see p.interval) of y[rep]. Gelfand's suggested goal is to achieve 95% predictive concordance. Lower percentages indicate too many outliers and a poor fit of the model to the data, and higher percentages may suggest overfitting. Concordance occurs only when y is continuous.

Mean Lift

This is the mean of the record-level lifts, and occurs only when y is specified as categorical with Categorical=TRUE.

Discrepancy.Statistic

This is only reported if the Discrep argument receives a valid discrepancy measure as listed above. The Discrep applies to each record of y, and the Discrepancy.Statistic reports the results of the discrepancy measure on the entire data set. For example, if Discrep="min(yhat[i,]) < min(y)", then the overall result is the proportion of records in which the minimum sample of yhat was less than the overall minimum y. This is Pr(min(yhat[i,]) < min(y) | y, Theta), where Theta is the parameter set.
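The "min(yhat[i,]) < min(y)" example above can be sketched as follows (an illustration with simulated data, not the package's code): the record-level test is taken over each row of yhat, and the Discrepancy.Statistic is the proportion of records passing it.

```r
set.seed(1)
y    <- rnorm(10)
yhat <- matrix(rnorm(10 * 1000, mean = y), nrow = 10)

record.test <- apply(yhat, 1, min) < min(y)  # per-record result, in Summary[,"Test"]
Discrepancy.Statistic <- mean(record.test)   # overall proportion across records
```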

L-criterion

The L-criterion (Laud and Ibrahim, 1995) was developed for model and variable selection. It is a sum of two components: one involves the predictive variance and the other includes the accuracy of the means of the predictive distribution. The L-criterion measures model performance with a combination of how close its predictions are to the observed data and variability of the predictions. Better models have smaller values of L. L is measured in the same units as the response variable, and measures how close the data vector y is to the predictive distribution. In addition to the value of L, there is a value for S.L, which is the calibration number of L, and is useful in determining how much of a decrease is necessary between models to be noteworthy.
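The two components described above, predictive variance plus squared error of the predictive means, can be sketched in one common form of the Laud and Ibrahim (1995) criterion; the package's exact computation (and its calibration number S.L) may differ.

```r
set.seed(1)
y    <- rnorm(10)
yhat <- matrix(rnorm(10 * 1000, mean = y), nrow = 10)

pred.var <- apply(yhat, 1, var)     # predictive variance per record
sq.error <- (y - rowMeans(yhat))^2  # squared error of the predictive means

L <- sqrt(sum(pred.var + sq.error)) # smaller values of L indicate a better model
```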

Summary

When y is continuous, this is a N x 8 matrix, where N is the number of records of y and there are 8 columns, as follows: y, Mean, SD, LB (the 2.5% quantile), Median, UB (the 97.5% quantile), PQ (the predictive quantile, which is Pr(y[rep] >= y)), and Test, which shows the record-level result of a test, if specified. When y is categorical, this matrix has a number of columns equal to the number of categories of y plus 3, also including y, Lift, and Discrep.

Author(s)

Statisticat, LLC.

References

Ando, T. (2007). "Bayesian Predictive Information Criterion for the Evaluation of Hierarchical Bayesian and Empirical Bayes Models". Biometrika, 94(2), p. 443–458.

Gelfand, A. (1996). "Model Determination Using Sampling Based Methods". Chapter 9 in Gilks, W., Richardson, S., and Spiegelhalter, D. (eds.), Markov Chain Monte Carlo in Practice. Chapman and Hall: Boca Raton, FL.

Gelfand, A. and Ghosh, S. (1998). "Model Choice: A Minimum Posterior Predictive Loss Approach". Biometrika, 85, p. 1–11.

Gelman, A., Meng, X.L., and Stern H. (1996). "Posterior Predictive Assessment of Model Fitness via Realized Discrepancies". Statistica Sinica, 6, p. 733–807.

Laud, P.W. and Ibrahim, J.G. (1995). "Predictive Model Selection". Journal of the Royal Statistical Society, B 57, p. 247–262.

Spiegelhalter, D.J., Best, N.G., Carlin, B.P., and van der Linde, A. (2002). "Bayesian Measures of Model Complexity and Fit (with Discussion)". Journal of the Royal Statistical Society, B 64, p. 583–639.

See Also

LaplacesDemon, predict.demonoid, and p.interval.

Examples

### See the LaplacesDemon function for an example.

LaplacesDemonR/LaplacesDemon documentation built on Aug. 15, 2018, 4:34 a.m.