Description Usage Arguments Details Value See Also Examples
View source: R/p_value_of_the_Bayesian_sense_for_chi_square_goodness_of_fit.R
Calculates the p value of the chi square goodness-of-fit statistic for our model. It obtains the chi square values
χ ( D_i | θ_j )
for all possible pairs of synthesized data-sets D_1, D_2, ..., D_i, ... and MCMC samples θ_1, θ_2, ..., θ_j, ....
StanS4class: An S4 object, to be passed to the function.
dig: A variable to be passed to the function.
Colour: Logical: TRUE or FALSE.
plot.replicated.points: Logical: TRUE or FALSE. If TRUE, then the replicated points (hits, false alarms) are drawn in the scatter plot. This process takes a long time, so if the user has no time, set plot.replicated.points = FALSE.
head.only: Logical: TRUE or FALSE.
counter.plot.via.schatter.plot: Logical: TRUE or FALSE.
Show.table: Logical: TRUE or FALSE.
Here, we briefly review how to obtain the chi square samples in the Bayesian paradigm.
First, let
f(y|θ)
be a model (likelihood) for a future data-set y with model parameter θ, and let
π(θ|D)
be the posterior for the given data D. In this situation, the Hamiltonian Monte Carlo method is performed to obtain MCMC samples of size N. Denote the MCMC samples by
θ_1, θ_2, θ_3, ..., θ_N
drawn from the posterior π(θ|D) of the given data D. Equivalently, we obtain the sequence of models
f(y| θ_1), f(y| θ_2), f(y| θ_3), ..., f(y| θ_N).
To get samples
y_1, y_2, ..., y_N
from the posterior predictive distribution, we merely draw y_1, y_2, ..., y_N from f(y| θ_1), f(y| θ_2), f(y| θ_3), ..., f(y| θ_N), respectively. That is, for every i, y_i is drawn from the distribution f(y|θ_i). In notation, we may write:
y_1 \sim f(.| θ_1)
y_2 \sim f(.| θ_2)
y_3 \sim f(.| θ_3)
\cdots
y_N \sim f(.| θ_N)
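The sampling scheme above can be sketched in R. The normal likelihood and the parameter values below are placeholders standing in for the package's actual FROC likelihood f(y|θ) and its MCMC draws, not code from this package:

```r
# A minimal sketch of posterior predictive sampling, assuming a toy
# normal likelihood in place of the actual model f(y | theta).
set.seed(1)
N     <- 1000
theta <- rnorm(N, mean = 5, sd = 0.1)   # stand-ins for MCMC draws theta_1, ..., theta_N
y.rep <- vapply(theta,
                function(t) rnorm(1, mean = t, sd = 1),  # y_i ~ f(. | theta_i)
                numeric(1))
length(y.rep)   # one replicated datum per MCMC draw, so N in total
```

Each replicated datum y_i is drawn from the model evaluated at its own parameter draw θ_i, which is exactly the scheme written in the displayed relations above.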
Once we have drawn samples from the posterior predictive density, we can calculate an arbitrary integral with respect to the posterior measure by the law of large numbers (the so-called Monte Carlo integral). We apply this to the following integral, which is the desired posterior predictive p value:
p value for data D := \int \int I( χ (y|θ) > χ (D|θ) ) f(y|θ) π(θ|D) dθ dy
Recall that the chi square goodness-of-fit statistic χ depends on both the model parameter θ and the data D, that is,
χ = χ (D|θ).
Integrating χ (D|θ) with respect to the posterior predictive measure, we obtain a quantity
χ (D)
which depends only on the data D. The return value of this function is this p value.
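The Monte Carlo approximation of this integral is simply the proportion of MCMC iterations in which the replicated data give a larger chi square than the observed data. The chi square vectors below are simulated placeholders for illustration, not output of this package:

```r
# A minimal sketch of the Monte Carlo estimate of the posterior predictive
# p value. chisq.obs[i] plays the role of chi(D | theta_i) and chisq.rep[i]
# that of chi(y_i | theta_i); here both are simulated as placeholders.
set.seed(1)
N         <- 1000
chisq.obs <- rchisq(N, df = 5)             # chi(D   | theta_i), observed data
chisq.rep <- rchisq(N, df = 5)             # chi(y_i | theta_i), replicated data
p.value   <- mean(chisq.rep > chisq.obs)   # law of large numbers
p.value
```

Because the two placeholder vectors come from the same distribution, the estimate here lands near 0.5, which is the value expected when replicated and observed data agree well.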
In this manner we obtain two sequences of samples: one from the posterior distribution and one from the posterior predictive distribution. Using these two kinds of samples, we can calculate the test statistic in the Bayesian manner. That is, in the frequentist method, test statistics are calculated at fixed model parameters, such as the maximum likelihood estimator. In the Bayesian context, however, the parameter is not deterministic, and hence we should calculate the test statistic with respect to the posterior measure. To accomplish this task, this package includes the present function.
The main return value is a nonnegative real number giving the p value of the chi square goodness of fit; the other components are the quantities used to calculate this p value.
get_samples_from_Posterior_Predictive_distribution, chi_square_goodness_of_fit_from_input_all_param
## Not run:
# First, fit the model to the data. If the user's computer is slow, the number
# of iterations of the Hamiltonian Monte Carlo method should be kept small,
# since the calculation of the posterior predictive p value is heavy.
fit <- fit_Bayesian_FROC(BayesianFROC::dataList.Chakra.1 ,ite = 1111)
# Next, extract the posterior predictive p value from the fitted model object
# "fit"; to do so, we make an object "output".
output <- p_value_of_the_Bayesian_sense_for_chi_square_goodness_of_fit(fit)
# The above R script prints a table in the R console.
# The more TRUE entries the table contains, the better the model fitting is.
# Finally, we obtain the following p value:
p.value <- output$p.values.for.chisquare
# The conventional significance level for a p value is 0.05 in the frequentist
# paradigm, but for this p value a larger threshold, e.g., 0.6 instead of 0.05,
# seems more appropriate.
# If the significance level is 0.5, then we test:
p.value > 0.5
# If this is FALSE, then the fitting is bad.
# The larger the p value, the better the fitting.
# If the user has no time, then plot.replicated.points = FALSE will help.
# With this setting, the replicated data from the posterior predictive
# distribution are not drawn, and hence the running time becomes shorter.
TPs.FPs <- p_value_of_the_Bayesian_sense_for_chi_square_goodness_of_fit(fit,
plot.replicated.points = FALSE)
# If the user wants to include the scatter plots of hits and false alarms from
# the posterior predictive distribution in a submission, a colour plot is not
# appropriate. By setting the argument Colour = FALSE, the scatter plot becomes
# black and white, so the user can use it for submission.
p_value_of_the_Bayesian_sense_for_chi_square_goodness_of_fit(fit,Colour = FALSE)
# Since p values depend on the data only, it is better to show this dependency
# more explicitly, as follows:
p_value_of_the_Bayesian_sense_for_chi_square_goodness_of_fit(
fit_Bayesian_FROC(dataList.High)
)
# Close the graphic device
Close_all_graphic_devices()
## End(Not run)