fit_Bayesian_FROC: Fit a model to data


View source: R/fit_Bayesian_FROC.R

Description

Creates a fitted model object of class stanfitExtended: an inherited class from the S4 class stanfit in rstan.

Usage

fit_Bayesian_FROC(
  dataList,
  ModifiedPoisson = FALSE,
  prior = -1,
  verbose = FALSE,
  print_CI_of_AUC = TRUE,
  samples_from_likelihood_for_ppp = 11,
  multinomial = TRUE,
  model_reparametrized = FALSE,
  Model_MRMC_non_hierarchical = TRUE,
  type_to_be_passed_into_plot = "l",
  ww = -11,
  www = 11,
  mm = 0.65,
  mmm = 11,
  vv = 5.31,
  vvv = 11,
  zz = 1.55,
  zzz = 11,
  prototype = FALSE,
  PreciseLogLikelihood = TRUE,
  DrawCurve = length(dataList$m) == 0,
  Drawcol = TRUE,
  summary = TRUE,
  mesh.for.drawing.curve = 1000,
  significantLevel = 0.7,
  new.imaging.device = TRUE,
  cha = 1,
  ite = 10000,
  DrawFROCcurve = TRUE,
  DrawAFROCcurve = FALSE,
  DrawCFPCTP = TRUE,
  dig = 5,
  war = floor(ite/5),
  see = 1234,
  Null.Hypothesis = FALSE,
  ...
)

Arguments

dataList

A list specifying FROC data to which a model is fitted. It consists of the numbers of TPs, FPs, lesions, and images. In the case of multiple readers or multiple modalities, modality IDs and reader IDs are also included.

The dataList is passed to the function rstan::sampling() of rstan, where it corresponds to the argument named data.

For single-reader, single-modality data, the dataList is constructed in the following manner:

dataList.Example <- list(

h = c(41,22,14,8,1), # number of hits for each confidence level

f = c(1,2,5,11,13), # number of false alarms for each confidence level

NL = 124, # number of lesions (signals)

NI = 63, # number of images (trials)

C = 5) # number of confidence levels (equal to the length of h or f)

Using this object dataList.Example, we can fit a model by calling fit_Bayesian_FROC(dataList.Example).

To make this R object dataList representing FROC data, this package provides the following functions:

dataset_creator_new_version()

Enter TP and FP data via a table.

create_dataset()

Enter TP and FP data in an interactive manner.

Before fitting a model, we can confirm that our dataset is correctly formatted by using the function viewdata().

—————————————————————————————-

A Single reader and a single modality (SRSC) case.

—————————————————————————————-

In a single reader and a single modality case (srsc), dataList is a list consisting of f, h, NL, NI, C where f, h are numeric vectors and NL, NI, C are positive integers.

f

Non-negative integer vector specifying the number of false alarms associated with each confidence level. The first component corresponds to the highest confidence level.

h

Non-negative integer vector specifying the number of hits associated with each confidence level. The first component corresponds to the highest confidence level.

NL

A positive integer, representing Number of Lesions.

NI

A positive integer, representing Number of Images.

C

A positive integer, representing Number of Confidence level.

For details of this data format, see the datasets included with this package. Note that the maximal number of confidence levels, denoted by C, is included, but the confidence level vector c should not be specified. If specified, it will be ignored, since it is created internally by c <- c(rep(C:1)) and the user input is not referenced, where C is the number of confidence levels. Therefore, write your hits and false alarms vectors so that they are compatible with this automatically created c vector.

Data format:

A single reader and a single modality case

——————————————————————————————————

 NI=63, NL=124            confidence level    No. of false alarms    No. of hits
 (in R console ->)        c                   f                      h
 ------------------------ ------------------- ---------------------- ----------------
 definitely present       c[1] = 5            f[1] = F_5 = 1         h[1] = H_5 = 41
 probably present         c[2] = 4            f[2] = F_4 = 2         h[2] = H_4 = 22
 equivocal                c[3] = 3            f[3] = F_3 = 5         h[3] = H_3 = 14
 subtle                   c[4] = 2            f[4] = F_2 = 11        h[4] = H_2 = 8
 very subtle              c[5] = 1            f[5] = F_1 = 13        h[5] = H_1 = 1

—————————————————————————————————

* false alarms = False Positives = FP

* hits = True Positives = TP

Note that in FROC data, every confidence level refers to the presence of a lesion; there is no confidence level for absence. Each reader marks a suspicious location only when he or she thinks a lesion is present, and these marked positions generate the hits and false alarms, so every confidence level indicates a suspected lesion. If a reader thinks there is no lesion, no location is marked and no confidence level is recorded, so an "absent" confidence level plays no role in this dataset.

Note that the confidence level vector c should not be specified. If specified, it will be ignored, since it is created automatically by c <- c(rep(C:1)) in the internal program and user input is not referenced even if it is given explicitly, where C is the number of confidence levels. So check the compatibility of your data with the automatically created confidence level vector c <- c(rep(C:1)) via the table displayed by the function viewdata(), as in the sketch below.
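As a minimal sketch (using the example data above; not code from the package), this is how the internally generated c vector lines up with the user-supplied h and f:

C <- 5
c <- rep(C:1)               # 5 4 3 2 1, generated internally; user input is ignored
h <- c(41, 22, 14,  8,  1)  # hits, highest confidence first
f <- c( 1,  2,  5, 11, 13)  # false alarms, highest confidence first
data.frame(confidence = c, hits = h, false.alarms = f)
# viewdata() displays a similar table for a dataList object.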

—————————————————————————————

Multiple readers and multiple modalities case, i.e., MRMC case

—————————————————————————————

In the case of multiple readers and multiple modalities (the MRMC case), in order to apply the function fit_Bayesian_FROC(), the R list object representing the FROC data must contain the components m, q, c, h, f, NL, C, M, Q.

C

A positive integer (scalar), representing the number of confidence levels.

M

A positive integer (scalar), representing the number of modalities.

Q

A positive integer, representing the number of readers.

m

A vector of positive integers, representing the modality ID vector.

q

A vector of positive integers, representing the reader ID vector.

c

A vector of positive integers, representing the confidence levels. This vector must be made by rep(rep(C:1), M*Q).

h

A vector of non-negative integers, representing the number of hits.

f

A vector of non-negative integers, representing the number of false alarms.

NL

A positive integer (scalar), representing the total number of lesions over all images.

Note that the number of confidence levels (denoted by C) is included in the above R object, but the confidence level vector itself is not included, because it is created automatically from C. To confirm that false positives and hits are correctly ordered with respect to the automatically generated confidence vector, use the function viewdata(), which displays the table; see also the construction sketch after the example table below.

Example data.

Multiple readers and multiple modalities ( i.e., MRMC)

—————————————————————————————————

 Modality ID    Reader ID    Confidence level    No. of false alarms    No. of hits
 m              q            c                   f                      h
 -------------- ------------ ------------------- ---------------------- ------------
 1              1            3                   20                     111
 1              1            2                   29                     55
 1              1            1                   21                     22
 1              2            3                   6                      100
 1              2            2                   15                     44
 1              2            1                   22                     11
 2              1            3                   6                      66
 2              1            2                   24                     55
 2              1            1                   23                     1
 2              2            3                   5                      66
 2              2            2                   30                     55
 2              2            1                   40                     44

—————————————————————————————————

* false alarms = False Positives = FP

* hits = True Positives = TP
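As an illustrative sketch (the vectors below repeat the example table above; NL is a hypothetical value, and this is not a built-in dataset), an MRMC dataList can be assembled as follows:

# Illustrative sketch: assembling an MRMC dataList from the example table above.
dataList.MRMC <- list(
  m  = rep(1:2, each = 6),              # modality ID
  q  = rep(rep(1:2, each = 3), 2),      # reader ID
  c  = rep(rep(3:1), 2 * 2),            # confidence levels, i.e., rep(rep(C:1), M*Q)
  h  = c(111, 55, 22, 100, 44, 11, 66, 55, 1, 66, 55, 44),
  f  = c( 20, 29, 21,   6, 15, 22,  6, 24, 23,  5, 30, 40),
  NL = 200,                             # total number of lesions (hypothetical)
  C  = 3, M = 2, Q = 2)
# viewdata(dataList.MRMC)   # confirm the ordering before fitting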

ModifiedPoisson

Logical, that is TRUE or FALSE.

If ModifiedPoisson = TRUE, then the Poisson rate of false alarms is calculated per lesion, and a model is fitted so that the FROC curve is the expected curve of points consisting of pairs of TPF per lesion and FPF per lesion.

Similarly,

if ModifiedPoisson = FALSE, then the Poisson rate of false alarms is calculated per image, and a model is fitted so that the FROC curve is the expected curve of points consisting of pairs of TPF per lesion and FPF per image.

For more details, see the author's paper, in which per-image and per-lesion rates are explained. (For details of the models, see the vignettes; they are currently omitted from this package because of their size.)

If ModifiedPoisson = TRUE, then the False Positive Fraction (FPF) is defined as follows (F_c denotes the number of false alarms with confidence level c )

\frac{F_1+F_2+F_3+F_4+F_5}{N_L},

\frac{F_2+F_3+F_4+F_5}{N_L},

\frac{F_3+F_4+F_5}{N_L},

\frac{F_4+F_5}{N_L},

\frac{F_5}{N_L},

where N_L is the number of lesions (signals). To emphasize the denominator N_L, we also call this the False Positive Fraction (FPF) per lesion.

On the other hand,

if ModifiedPoisson = FALSE (Default), then False Positive Fraction (FPF) is given by

\frac{F_1+F_2+F_3+F_4+F_5}{N_I},

\frac{F_2+F_3+F_4+F_5}{N_I},

\frac{F_3+F_4+F_5}{N_I},

\frac{F_4+F_5}{N_I},

\frac{F_5}{N_I},

where N_I is the number of images (trials). To emphasize the denominator N_I, we also call this the False Positive Fraction (FPF) per image.

The model is fitted so that the estimated FROC curve can be regarded as the expected pairs of FPF per image and TPF per lesion (ModifiedPoisson = FALSE), or as the expected pairs of FPF per lesion and TPF per lesion (ModifiedPoisson = TRUE).

That is, if ModifiedPoisson = TRUE, the FROC curve means the expected pair of FPF per lesion and TPF; if ModifiedPoisson = FALSE, it means the expected pair of FPF per image and TPF.

So the FPF data change, and hence the fitted model also changes, according to whether ModifiedPoisson is TRUE or FALSE. Traditional FROC analysis uses only per-image (per-trial) rates. Since one image can be divided into two or more images, the number of trials is not essential; what matters more is the rate per signal. Therefore the author also developed the FROC theory for the per-lesion case. The FROC curve is rigid with respect to changes in the number of images, so in that sense it does not matter whether ModifiedPoisson is TRUE or FALSE; this rigidity means the number of images is a redundant parameter of the FROC trial, and the author tries to exclude it. The sketch below illustrates the two scalings of the empirical FPF.

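As a minimal sketch (using the single-reader example data above; not package code), the two scalings of the empirical cumulative FPF, and the TPF, can be computed as follows:

h  <- c(41, 22, 14,  8,  1)  # hits, highest confidence first
f  <- c( 1,  2,  5, 11, 13)  # false alarms, highest confidence first
NL <- 124; NI <- 63
CFP.per.image  <- cumsum(f) / NI   # scaling used when ModifiedPoisson = FALSE
CFP.per.lesion <- cumsum(f) / NL   # scaling used when ModifiedPoisson = TRUE
CTP.per.lesion <- cumsum(h) / NL   # TPF is always per lesion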

prior

An integer, used to select the prior.

verbose

A logical; if TRUE, a verbose summary is printed in the R console. If FALSE, output from this function is suppressed.

print_CI_of_AUC

Logical, if TRUE then Credible intervals of AUCs for each modality are plotted.

samples_from_likelihood_for_ppp

A positive integer specifying the sample size. These samples are drawn from the likelihood to calculate the posterior predictive p value of the chi-square statistic.

multinomial

A logical; if TRUE, the model is the most classical one, using a multinomial distribution.

model_reparametrized

A logical, if TRUE, then a model under construction is used.

Model_MRMC_non_hierarchical

A logical. If TRUE, then the model for multiple readers and multiple modalities contains no hyperparameters. The author added this parameter because the hyperparameters make the MCMC posterior samples unstable, and the hierarchical model is also not well founded theoretically. The default is TRUE.

type_to_be_passed_into_plot

"l" or "p".

zz, zzz, ww, www, mm, mmm, vv, vvv

Each is a real number specifying one of the parameters of the prior.

prototype

A logical; if TRUE, then the model is no longer a generative model. Namely, generally speaking, a dataset drawn from the model need not satisfy the condition that the sum of the numbers of hits over all confidence levels is bounded above by the number of lesions, namely,

Σ_c H_c ≤ N_L

However, this model (prototype = TRUE) is convenient in the sense that it admits a wide range of initial values for MCMC sampling.

If FALSE, then the model is a proper statistical model in the sense that any dataset drawn from it satisfies that the sum of the numbers of hits is not greater than the number of lesions, namely,

Σ_c H_c ≤ N_L.

This model is theoretically preferable. In practice, however, the calculation can generate undesired results caused by floating-point error. The prior can produce very small hit rates such as 1.234e-16, and these lead to inaccurate ratios of two very small numbers; such a ratio becomes a hit rate, and in the worst case the synthesized Bernoulli success probability exceeds 1. To avoid this, the prior should be designed so as to avoid such very small numbers; the author has ideas for this, but they have not yet succeeded.

If prototype = TRUE, then the model for hits is the following:

H_5 \sim Binomial(p_5,N_L)

H_4 \sim Binomial(p_4,N_L)

H_3 \sim Binomial(p_3,N_L)

H_2 \sim Binomial(p_2,N_L)

H_1 \sim Binomial(p_1,N_L)

On the other hand, if prototype = FALSE, then the model for hits is the following:

H_5 \sim Binomial( p_5,N_L )

H_4 \sim Binomial( \frac{p_4}{1-p_5},N_L - H_5)

H_3 \sim Binomial( \frac{p_3}{1-p_5-p_4},N_L - H_5-H_4)

H_2 \sim Binomial( \frac{p_2}{1-p_5-p_4-p_3},N_L - H_5-H_4-H_3)

H_1 \sim Binomial( \frac{p_1}{1-p_5-p_4-p_3-p_2},N_L - H_5-H_4-H_3-H_2)

The number of remaining lesions is adjusted at each level so that the sum of hits Σ_c H_c does not exceed the number of lesions (signals, targets) N_L. Hence the model in the case prototype = FALSE is a generative model in the sense that it can replicate FROC datasets. Note that adjusting the number of lesions in this manner also requires adjusting the hit rates. The reason we use hit rates such as \frac{p_2}{1-p_5-p_4-p_3} instead of p_c is that this ensures the equality E[H_c/N_L] = p_c, which is very important. To establish a Bayesian FROC theory compatible with the classical FROC theory, we need the following two equations,

E[H_c/N_L] = p_c,

E[F_c/N_X] = q_c,

where E denotes the expectation, N_X is the number of lesions or the number of images, and q_c is a false alarm rate, namely F_c \sim Poisson(q_c N_X).

Using the above two equations, we can establish an alternative Bayesian FROC theory preserving the classical notions and formulas. For details, please see the author's preprint:

Bayesian Models for Free-response Receiver Operating Characteristic Analysis.

The author did not initially notice that the prototype is not a generative model, and hence later revised the model so that it is exactly a generative model.

The reason the author retains the prototype model (prototype = TRUE) is that MCMC convergence in the MRMC case is not good for the current model (prototype = FALSE), because it uses fractions such as \frac{p_1}{1-p_5-p_4-p_3-p_2}, which are numerically dangerous. For example, if p_1 is very small, then both the numerator and the denominator of \frac{p_1}{1-p_5-p_4-p_3-p_2} are very small, on the order of 1e-16, and such small numbers cause inaccurate results. Consequently, it sometimes happens numerically that \frac{p_1}{1-p_5-p_4-p_3-p_2} > 1, which can never occur theoretically.

The author is currently trying to avoid this phenomenon via priors, but so far without success.

Here, of course, we interpret terms such as N_L - H_5 - H_4 - H_3 as the targets remaining after the reader's hits. An alternative would be to use N_L - H_1 - H_2 - H_3, but this is not employed, since the author thinks a reader assigns suspicious lesion locations starting from the highest confidence level, so targets should be regarded as found from the highest-confidence suspicious locations first. The sketch below simulates hits under this conditional scheme.
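A minimal simulation sketch (with a hypothetical hit-rate vector p; not package code) of the conditional binomial scheme used when prototype = FALSE, showing that the total number of hits never exceeds N_L:

set.seed(1)
NL <- 124
p  <- c(0.30, 0.18, 0.10, 0.06, 0.01)   # hypothetical hit rates p_5, p_4, ..., p_1
H  <- numeric(5)
remaining <- NL      # lesions not yet hit
used      <- 0       # p_5 + p_4 + ... already consumed
for (level in 1:5) {                     # level 1 corresponds to p_5 (highest confidence)
  rate <- p[level] / (1 - used)          # adjusted hit rate, e.g., p_4 / (1 - p_5)
  H[level] <- rbinom(1, remaining, rate)
  remaining <- remaining - H[level]
  used <- used + p[level]
}
H
sum(H) <= NL    # always TRUE under this conditional scheme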

PreciseLogLikelihood

Logical, that is TRUE or FALSE. If PreciseLogLikelihood = TRUE (default), then Stan calculates the precise log likelihood using the target formulation. If PreciseLogLikelihood = FALSE, then Stan calculates the log likelihood by dropping the constant terms in the likelihood function. In the past, the author maintained two Stan files, one with the target formulation and one without; because the non-target formulation caused Jacobian warnings, all Stan files were switched to the target formulation for the CRAN release, so this variable is now effectively meaningless.

DrawCurve

Logical: TRUE or FALSE; whether the curve is to be drawn. If you want to draw the FROC and AFROC curves, set DrawCurve = TRUE; if not, set DrawCurve = FALSE. The author provides this variable because drawing the curves takes a long time in the MRMC case, so the default value is FALSE for MRMC data.

Drawcol

Logical: TRUE or FALSE; whether the (A)FROC curve is drawn using the colors of the dark theme. The default value is TRUE.

summary

Logical: TRUE or FALSE; whether to print the verbose summary. If TRUE, a verbose summary is printed in the R console; if FALSE, the output is minimal. (In retrospect, this variable should have been named verbose.)

mesh.for.drawing.curve

A large positive integer, indicating the number of points used to draw the curves. Default = 1000.

significantLevel

This is a number between 0 and 1. The results are shown if posterior probabilities are greater than this quantity.

new.imaging.device

Logical: TRUE or FALSE. If TRUE (default), a new graphics device is opened to draw the curve. By setting new.imaging.device = FALSE, curves can be drawn on the same plane.

cha

A variable to be passed to the function rstan::sampling() of rstan, where it is named chains. A positive integer representing the number of chains generated by the Hamiltonian Monte Carlo method. Default = 1.

ite

A variable to be passed to the function rstan::sampling() of rstan, where it is named iter. A positive integer representing the number of iterations of the Hamiltonian Monte Carlo method. Default = 10000.

DrawFROCcurve

Logical: TRUE or FALSE; whether the FROC curve is to be drawn.

DrawAFROCcurve

Logical: TRUE or FALSE; whether the AFROC curve is to be drawn.

DrawCFPCTP

Logical: TRUE or FALSE; whether the CFP and CTP points are to be drawn. CFP: cumulative false positives per lesion (or per image), also called the False Positive Fraction (FPF). CTP: cumulative true positives per lesion, also called the True Positive Fraction (TPF).

dig

A positive integer representing the number of significant digits used in the Stan calculation. Default = 5.

war

A variable to be passed to the function rstan::sampling() of rstan, where it is named warmup. A positive integer representing the burn-in period, which must be less than ite. Defaults to war = floor(ite/5) = 10000/5 = 2000.

see

A variable to be passed to the function rstan::sampling() of rstan, where it is named seed. A positive integer representing the seed used in Stan. Default = 1234.

Null.Hypothesis

Logical, that is TRUE or FALSE. If Null.Hypothesis = FALSE (default), then the alternative model is fitted to dataList (for details of the models, see the vignettes). If Null.Hypothesis = TRUE, then the null model is fitted to dataList. Note that the null model is constructed under the null hypothesis that all modalities have the same observer performance, while the alternative model assumes they are not all the same. The author created this parameter in order to test the null hypothesis by the Bayes factor, but the results have not been satisfactory, so this test is still under construction.

...

Additional arguments

Details

For details, see vignettes

The p value calculation has been improved by using the generated quantities block in the Stan files. The p value is defined as follows.

In order to evaluate the goodness of fit of our model to the data, we use the so-called posterior predictive p value.

In the following, we use general conventional notations. Let y_{obs} be an observed dataset and f(y|θ) be a model (likelihood) for future dataset y. We denote a prior and a posterior distribution by π(θ) and π(θ|y) \propto f(y|θ)π(θ), respectively.

In our case, the data y is a pair of hits and false alarms; that is, y = (H_1, H_2, …, H_C; F_1, F_2, …, F_C) and θ = (z_1, dz_1, dz_2, …, dz_{C-1}, μ, σ). We define the χ^2 discrepancy (goodness-of-fit statistic) to assess how well our model fits the data:

T(y,θ) := ∑_{c=1}^{C} \biggl( \frac{\bigl(H_c-N_L\times p_c(θ) \bigr)^2}{N_L\times p_c(θ)}+ \frac{\bigl(F_c- q_{c}(θ) \times N_{X}\bigr)^2}{ q_{c}(θ) \times N_{X} }\biggr)

for a single reader and a single modality.

T(y,θ) := ∑_{r=1}^R ∑_{m=1}^M ∑_{c=1}^C \biggr( \frac{(H_{c,m,r}-N_L\times p_{c,m,r}(θ))^2}{N_L\times p_{c,m,r}(θ)}+ \frac{\bigl(F_c- q_{c}(θ) \times N_{X}\bigr)^2}{ q_{c}(θ) \times N_{X} }\biggr).

for multiple readers and multiple modalities.

Note that p_c and q_c depend on θ.
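A minimal sketch (not package code) of the chi-square discrepancy T(y, θ) for a single reader and a single modality, given hit rates p and false alarm rates q from one posterior draw:

chi_square_discrepancy <- function(h, f, p, q, NL, NX) {
  expected_hits   <- NL * p      # N_L * p_c(theta)
  expected_alarms <- NX * q      # q_c(theta) * N_X
  sum((h - expected_hits)^2   / expected_hits +
      (f - expected_alarms)^2 / expected_alarms)
}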

In classical frequentist methods, the parameter θ is a fixed estimate, e.g., the maximum likelihood estimator. However, in a Bayesian context, the parameter is not deterministic. In the following, we give the p value in the Bayesian sense.

Let y_{obs} be an observed dataset (in an FROC context, it is hits and false alarms). Then, the so-called posterior predictive p value is defined by

p_value = \int \int \, dy\, dθ\, I( T(y,θ) > T(y_{obs},θ) )f(y|θ)π(θ|y_{obs})

In order to calculate the above integral, let θ_1, θ_2, ..., θ_i, ..., θ_I be samples from the posterior distribution given y_{obs}, namely,

θ_1 \sim π(....|y_{obs} ),

.......,

θ_i \sim π(....|y_{obs} ),

.......,

θ_I \sim π(....|y_{obs} ).

We obtain a sequence of models (likelihoods), i.e., f(....|θ_1), f(....|θ_2), ..., f(....|θ_I). We then draw samples y^1_1, ..., y^i_j, ..., y^I_J, such that each y^i_j is a sample from the distribution whose density function is f(....|θ_i), namely,

y^1_1,.......,y^1_j,.......,y^1_J \sim f(....|θ_1),

.......,

y^i_1,.......,y^i_j,.......,y^i_J \sim f(....|θ_i),

.......,

y^I_1,.......,y^I_j,.......,y^I_J \sim f(....|θ_I).

Using the Monte Carlo approximation twice, we can calculate the integral of any function φ(y,θ):

\int \int \, dy\, dθ\, φ(y,θ)f(y|θ)π(θ|y_{obs}) \approx \frac{1}{I}∑_{i=1}^I \int φ(y,θ_i)f(y|θ_i)\,dy \approx \frac{1}{IJ}∑_{i=1}^I ∑_{j=1}^J φ(y^i_j,θ_i).

In particular, substituting φ(y,θ):= I( T(y,θ) > T(y_{obs},θ) ) into the above equation, we can approximate the posterior predictive p value.

p_value \approx \frac{1}{IJ}∑_i ∑_j I( T(y^i_j,θ_i) > T(y_{obs},θ_i) )
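A schematic sketch (not the package implementation) of this double Monte Carlo approximation; theta_draws, simulate_y() and discrepancy() are hypothetical placeholders for a list of I posterior draws, a simulator drawing one replicated dataset from the likelihood, and the chi-square statistic defined above:

ppp_value <- function(theta_draws, y_obs, simulate_y, discrepancy, J = 11) {
  per_draw <- vapply(theta_draws, function(theta) {
    # fraction of J replicated datasets whose discrepancy exceeds the observed one
    mean(replicate(J, discrepancy(simulate_y(theta), theta) >
                       discrepancy(y_obs, theta)))
  }, numeric(1))
  mean(per_draw)  # (1/(I*J)) * sum_i sum_j I( T(y^i_j, theta_i) > T(y_obs, theta_i) )
}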

Value

An object of class stanfitExtended, which is an S4 class inherited from the S4 class stanfit. Via rstan::sampling(), the function fits the author's Bayesian FROC models to the user's data.

Use this fitted model object for subsequent analyses, such as drawing the FROC curve and the alternative FROC (AFROC) curve.

————————————————————————————————————

Notations and symbols for the Outputs of a single reader and a single modality case

—————————————————————————————————————-

In the following, the notations for estimated parameters are shown.

w A real number representing the lowest threshold of the Gaussian assumption (bi-normal assumption), so w = z[1].

dz[1] A real number representing the difference of the first and second threshold of the Gaussian assumption: dz[1] := z[2] - z[1].

dz[2] A real number representing the difference of the second and third threshold of the Gaussian assumption: dz[2] := z[3] - z[2].

dz[3] A real number representing the difference of the third and fourth threshold of the Gaussian assumption: dz[3] := z[4] - z[3].

...

m A real number representing the mean of the latent Gaussian distribution for diseased images. In TeX, it is denoted by μ.

v A positive real number representing the standard deviation of the latent Gaussian distribution for diseased images. In TeX, it is denoted by σ (not the square of σ).

p[1] A real number representing the Hit rate with confidence level 1.

p[2] A real number representing the Hit rate with confidence level 2.

p[3] A real number representing the Hit rate with confidence level 3.

...

l[1] A positive real number representing the (Cumulative) False positive rate with confidence level 1. In TeX, it will be denoted by λ_1.

l[2] A positive real number representing the (Cumulative) False positive rate with confidence level 2. In TeX, it will be denoted by λ_2.

l[3] A positive real number representing the (Cumulative) False positive rate with confidence level 3. In TeX, it will be denoted by λ_3.

l[4] A positive real number representing the (Cumulative) False positive rate with confidence level 4. In TeX, it will be denoted by λ_4.

...

dl[1] A positive real number representing the difference l[1] - l[2].

dl[2] A positive real number representing the difference l[2] - l[3].

dl[3] A positive real number representing the difference l[3] - l[4].

...

z[1] A real number representing the lowest threshold of the (Gaussian) bi-normal assumption.

z[2] A real number representing the 2nd threshold of the (Gaussian) bi normal assumption.

z[3] A real number representing the 3rd threshold of the (Gaussian) bi normal assumption.

z[4] A real number representing the fourth threshold of the (Gaussian) bi-normal assumption.

a A real number defined by m/v; see the author's paper for details.

b A real number defined by 1/v; see the author's paper for details.

A A positive real number between 0 and 1, representing the AUC, i.e., the area under the alternative FROC (AFROC) curve.

lp__ The log likelihood of the model given the data.
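As an illustrative sketch (not package code, and assuming the usual bi-normal formula A = Φ(a / √(1 + b²)); compare with the "A" reported by the fitted object), a, b and an AUC can be recomputed from the posterior means of m and v:

e <- rstan::extract(fit)   # "fit" is a fitted model object as in the Examples below
a <- mean(e$m) / mean(e$v)
b <- 1 / mean(e$v)
A <- pnorm(a / sqrt(1 + b^2))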

—————————————————————————————————————

—- Notations and symbols: Outputs of Multiple Reader and Multiple Modality case ——

——————————————————————————————————————

w The lowest threshold of the Gaussian assumption (bi-normal assumption). so w=z[1].

dz[1] The difference of the first and second threshold of the Gaussian assumption.

dz[2] The difference of the second and third threshold of the Gaussian assumption.

dz[3] The difference of the third and fourth threshold of the Gaussian assumption.

...

mu The mean of the Latent Gaussian distribution for diseased images.

v The variance of the Latent Gaussian distribution for diseased images.

ppp[1,1,1] Hit rate with confidence level 1, modality 1, reader 1.

ppp[2,1,1] Hit rate with confidence level 2, modality 1, reader 1.

ppp[3,1,1] Hit rate with confidence level 3, modality 1, reader 1.

...

l[1] (Cumulative) False positive rate with confidence level 1.

l[2] (Cumulative) False positive rate with confidence level 2.

l[3] (Cumulative) False positive rate with confidence level 3.

l[4] (Cumulative) False positive rate with confidence level 4.

...

dl[1] This is defined by the difference l[1] - l[2].

dl[2] This is defined by the difference l[2] - l[3].

dl[3] This is defined by the difference l[3] - l[4].

...

z[1] The lowest threshold of the (Gaussian) bi-normal assumption.

z[2] The 2nd threshold of the (Gaussian) bi normal assumption.

z[3] The 3rd threshold of the (Gaussian) bi normal assumption.

z[4] The fourth threshold of the (Gaussian) bi-normal assumption.

aa This is defined by m/v, please see the author's paper for more detail.

bb This is defined by 1/v, please see the author's paper for more detail.

AA The area under alternative FROC curve associated to reader and modality.

A The area under alternative FROC curve associated to modality.

hyper_v Standard deviation of AA around A.

lp__ The log likelihood of the model given the data.

References

Bayesian Models for Free-response Receiver Operating Characteristic Analysis; preprint. See the vignettes.

See Also

——— Before fitting: create a dataset

dataset_creator_new_version

Create an R object which represents user data.

create_dataset

Create an R object which represents user data.

——— Further sequential analysis: Plot curves

Using the result of fitting a Bayesian FROC model, we can proceed to further analyses.

DrawCurves

for drawing free response ROC curves.

——— Further sequential analysis: Validation of the Model
——— R objects of example datasets, real-world or fictitious:
dataList.Chakra.1

A list for an example dataset of a single reader and a single modality data. The word Chakra in the dataset name means that it appears in the paper of Chakraborty.

dataList.Chakra.2

A list for an example dataset of a single reader and a single modality data. The word Chakra in the dataset name means that it appears in the paper of Chakraborty.

dataList.Chakra.3

A list for an example dataset of a single reader and a single modality data. The word Chakra in the dataset name means that it appears in the paper of Chakraborty.

dataList.Chakra.4

A list for an example dataset of a single reader and a single modality data. The word Chakra in the dataset name means that it appears in the paper of Chakraborty.

dataList.high.ability

A list for an example dataset of a single reader and a single modality data

dataList.low.ability

A list for an example dataset of a single reader and a single modality data

dataList.Chakra.Web

A list for an example dataset of multiple readers and multiple modalities data. The word Chakra in the dataset name means that it appears in the paper of Chakraborty.

data.hier.ficitious

A list for an example dataset of multiple readers and multiple modalities data

dataList.High

A list for an example dataset of a single reader and a single modality data whose AUC is high.

dataList.Low

A list for an example dataset of a single reader and a single modality data whose AUC is low.

data.bad.fit

A list for an example dataset of a single reader and a single modality whose fit is bad, that is, whose chi-square statistic is very large, even though the MCMC convergence criteria are satisfied to high quality. Thus good MCMC convergence does not mean the model is correct. To fit a model to this data, we should replace the latent Gaussian and the differential logarithmic Gaussian with distributions more appropriate for the hit and false alarm rates; theoretically, there is no a priori preferred distribution for these rates, so when a dataset fits poorly we should change the model, and such a change occurs in the latent distributions. The author keeps this dataset to show that the model is neither unique nor always good, and to indicate directions for future research.

d,dd ,ddd ,dddd ,ddddd,dddddd,ddddddd

Other datasets; the author likes these datasets because their names are very simple.

Examples

## Not run: 
#========================================================================================
#                               The 1-st example
#========================================================================================
#
#
#                  Making FROC Data and Fitting a Model to the data
#
#                                Notations
#
#            h = hits = TP = True Positives
#            f = False alarms = FP = False Positives
#
#
#========================================================================================
#            1)             Build a data-set
#========================================================================================

                BayesianFROC:::clearWorkspace()

# For a single reader and a single modality  case.

    dat <- list(c=c(3,2,1),    #     Confidence level. Note that c is ignored.
            h=c(97,32,31), #     Number of hits for each confidence level
            f=c(1,14,74),  #     Number of false alarms for each confidence level

            NL=259,        #     Number of lesions
            NI=57,         #     Number of images
            C=3)           #     Number of confidence level


      if (interactive()){   viewdata(dat)}

#  where,
#      c denotes confidence level, i.e., rating of reader.
#                3 = Definitely diseased,
#                2 = subtle,
#                1 = very subtle
#      h denotes number of hits (True Positives: TP) for each confidence level,
#      f denotes number of false alarms (False Positives: FP) for each confidence level,
#      NL denotes number of lesions,
#      NI denotes number of images,


# For example, in the above example data,
#  the number of hits with confidence level 3 is 97,
#  the number of hits with confidence level 2 is 32,
#  the number of hits with confidence level 1 is 31,

#  the number of false alarms with confidence level 3 is 1,
#  the number of false alarms with confidence level 2 is 14,
#  the number of false alarms with confidence level 1 is 74,


#========================================================================================
#                         2)       Fit an FROC model to the above dataset.
#========================================================================================






          fit <-   fit_Bayesian_FROC(
                           dat,       # dataset
                           ite = 111,  #To run in time <5s.
                           cha = 1,      # number of chains; larger is better.
                           summary = FALSE
                               )



# The return value "fit" is an S4 object of class "stanfitExtended" which is inherited
# from the S4 class "stanfit".


#========================================================================================
#             3)  Change the S4 class of fitted model object
# Change the S4 class from "stanfitExtended" to "stanfit" to apply other packages.
# The fitted model object of class "stanfit" is  widely available.
# For example the package ggmcmc, rstan,  shinystan::launch_shinystan(stanfit_object)
# Thus, to use such packages, we get back the inherited class into "stanfit" as follows:

# Changing the class from stanfitExtended to stanfit,
# we can apply other packages' functions to the resulting object.

#========================================================================================




                   fit.stan   <-   methods::as(fit,"stanfit")



# Then, return value "fit.stan" is no longer an S4 object of class "stanfitExtended" but
# the S4 object of class "stanfit" which is widely adequate for many packages.


#========================================================================================
#             3.1)  Apply the functions for the class stanfit
#========================================================================================

grDevices::dev.new();rstan::stan_hist(fit.stan, bins=33,pars = c("A"))
grDevices::dev.new();rstan::stan_hist(fit.stan, bins=22,pars = c("A"))
grDevices::dev.new();rstan::stan_hist(fit.stan, bins=11,pars = c("A"))

grDevices::dev.off()

# I am not sure why the above stan_hist also works for the new S4 class "stanfitExtended"

# Get pipe operator


          #       `%>%`    <-    utils::getFromNamespace("%>%", "magrittr")



# Plot about MCMC samples of parameter name "A", representing AUC






# The author does not think the inherited class "stanfitExtended" is ideal,
# because the object is redundant and large,
# which is caused by the fact that the inherited class contains plot data for the FROC curve.
# To show the difference in size between fitted model objects of class
# stanfitExtended and stanfit, we execute the following code;


   size_of_return_value(fit) - size_of_return_value(methods::as(fit,"stanfit"))







# 4) Using the S4 object fit, we can go a step further, e.g., calculating the
# chi-square statistic and the p value in the Bayesian sense to test goodness of fit.
# The p value has the problem that it depends monotonically on the sample size,
# but it is widely used, so it is implemented here.




#========================================================================================
#                                   REMARK
#========================================================================================

#
# The above data should not be written as follows:

# MANNER (A)   dat <- list(c=c(1,2,3),h=c(31,32,97),f=c(74,14,1),NL=259,NI=57,C=3)


# Even if user writes data in the above MANNER (A),
# the program interprets it as the following MANNER (B);

# MANNER (B)   dat <- list(c=c(3,2,1),h=c(31,32,97),f=c(74,14,1),NL=259,NI=57,C=3)

# Because the vector c is ignored by the program
# and is generated automatically by the code rep(C:1) inside the function,
# we can omit the vector c from the list.



# This package uses a very rigid data format, so please make sure your data format
# exactly matches the format used in this package.
# More precisely, the confidence level vector should be ordered as rep(C:1) (not rep(1:C)).
# Note that the confidence level vector c should not be specified.
# If specified, it will be ignored,
# since it is created by c <- c(rep(C:1)) in the program and
# the user-supplied confidence level vector is not referenced,
# where C is the highest number of confidence levels.
# (The author regrets this ordering, chosen early in development,
# but it is too late to change.)











#========================================================================================
#                               The 2-nd example
#========================================================================================
#

#    (1)First, we prepare the data from this package.


                 dat  <- BayesianFROC::dataList.Chakra.1


#    (2) Second, we run fit_Bayesian_FROC(), in which rstan::sampling() is used,
#    with the data named "dat" and the author's Bayesian model.


                 fit <-  fit_Bayesian_FROC(dat,
                           ite = 111  #To run in time <5s.
                           )






#   Now, we get the object named "fit" which is an S4 object of class stanfitExtended.

# << Minor Comments>>
#  More precisely, this is an S4 object of some inherited class (named stanfitExtended)
#  which is extended using stan's S4 class named "stanfit".


 fit.stan <- methods::as(fit,"stanfit")
#  Using the output "fit.stan",

#  we can use the functions in the "rstan" package, for example, as follows;

  grDevices::dev.new();
         rstan::stan_trace(fit.stan, pars = c("A"))# stochastic process of a posterior estimate
         rstan::stan_hist(fit.stan, pars = c("A")) # Histogram of a posterior estimate
         rstan::stan_rhat(fit.stan, pars = c("A")) # Histogram of rhat for all parameters
         rstan::summary(fit.stan, pars = c("A"))   # summary of fit.stan by rstan
 grDevices::dev.off()





#========================================================================================
#                               The 3-rd example
#========================================================================================

#    Fit a model to a hand made data

#     1) Build the data for a single reader and a single modality  case.

   dat <- list(
            c=c(3,2,1),    #  Confidence level, which is ignored.
            h=c(97,32,31), #  Number of hits for each confidence level
            f=c(1,14,74),  #  Number of false alarms for each confidence level

            NL=259,       #   Number of lesions
            NI=57,        #   Number of images
            C=3)          #   Number of confidence level




#  where,
#        c denotes confidence level; each component indicates that
#                3 = Definitely lesion,
#                2 = subtle,
#                1 = very subtle
#          That is, a higher number indicates a higher confidence level.
#        h denotes number of hits
#          (True Positives: TP) for each confidence level,
#        f denotes number of false alarms
#          (False Positives: FP) for each confidence level,
#        NL denotes number of lesions,
#        NI denotes number of images,


#     2) Fit  and draw FROC and AFROC curves.




           fit <-   fit_Bayesian_FROC(dat, DrawCurve = TRUE)



# (( REMARK ))
#           Changing the hits and false alarms denoted by h and  f
#           in the above dataset denoted by dat,
#           user can fit a model to various datasets and draw corresponding FROC curves.
#           Enjoy drawing the curves for various datasets in case of
#           a single reader and a single modality data




#========================================================================================
#  For Prior and Bayesian Update:

#            Calculates a posterior mean  and  variance

#                                                         for each parameter
#========================================================================================


# Mean values of posterior samples are used as point estimates.
# The posterior variance receives less attention,
# but to construct a prior we will need it.
# For example, if we assume that the model parameter m has a Gaussian prior,
# then we need the mean and variance to characterize that prior.


                e <- rstan::extract(fit)



#  The model parameters m and v are numbers,
#  indicating the mean and standard deviation of the signal distribution, respectively.

                stats::var(e$m)

                mean(e$m)




                stats::var(e$v)

                mean(e$v)



# The model parameter z or dz is a vector, and thus we execute the following;

#   z = (   z[1],  z[2],  z[3]  )

#  dz = (   z[2]-z[1],     z[3]-z[2]   )



# Posterior means of the MCMC samples for the parameters z and dz


              apply(e$dz, 2, mean)

              apply(e$z, 2, mean)






# Posterior variances of the MCMC samples for the parameters z and dz



              apply(e$dz, 2, var)

              apply(e$z, 2, var)






              apply(e$dl, 2, mean)

              apply(e$l, 2, mean)

              apply(e$p, 2, mean)

              apply(e$p, 2, var)
















#========================================================================================
#                               The 4-th example
#========================================================================================
#


## Only run examples in interactive R sessions
if (interactive()) {

#         1) Build the data interactively,

                      dataList <-  create_dataset()

# Now, as the return value of create_dataset(), we get the FROC data (a list) named dataList.

#        2) Fit an MRMC or srsc FROC model.

                      fit <-  fit_Bayesian_FROC(dataList)


}## Only run examples in interactive R sessions


#========================================================================================
#                               The 5-th example
#========================================================================================
# Comparison of the posterior probability for AUC


# In the following, we calculate the probability of the events that
# the AUC of some modality is greater than the AUC of another modality.


#========================================================================================
#     Posterior Probability for some events of AUCs by using posterior MCMC samples
#========================================================================================


# This example shows how to use the stanfit (stanfit.Extended) object.
# Using stanfit object, we can extract posterior samples and using these samples,
# we can calculate the posterior probability of research questions.



    fit <- fit_Bayesian_FROC(dataList.Chakra.Web.orderd,ite = 111,summary =FALSE)



#    For example, we show the code to compute the posterior probability of the event
#    that the AUC of modality 1 is larger than that of modality 2:



                              e <- extract(fit)


# Then, the MCMC samples are extracted in the object "e" for all parameters.
# From this, e.g., AUC can be extracted by the code e$A that is a two dimensional array.
# The first component of e$A indicates the ID of MCMC samples and
# the second component indicates the modality ID.

# For example, the code e$A[,1] means the vector of MCMC samples of the 1 st modality.
# For example, the code e$A[,2] means the vector of MCMC samples of the 2 nd modality.
# For example, the code e$A[,3] means the vector of MCMC samples of the 3 rd modality.
#    To calculate the posterior probability of the event
#    that the AUC of modality 1 is larger than that of modality 2,
#    we execute the following R script:

                        mean(e$A[,1] > e$A[,2])


#    Similarly, to compute the posterior probability of the event that
#     the AUC of modality 1 is larger  than  that of modality 3:

                        mean(e$A[,1] > e$A[,3])


#    Similarly, to compute the posterior probability of the event that
#     the AUC of modality 1 is larger  than  that of modality 4:

                        mean(e$A[,1] > e$A[,4])


#    Similarly, to compute the posterior probability of the event that
#     the AUC of modality 1 is larger  than  that of modality 5:

                        mean(e$A[,1] > e$A[,5])


#    Similarly, to compute the posterior probability of the event that
#     the AUC of modality 1 is larger  than  that of modality 5 at least 0.01


                        mean(e$A[,1] > e$A[,5]+0.01)


#      Similarly,

                 mean( e$A[,1] > e$A[,5] + 0.01 )
                 mean( e$A[,1] > e$A[,5] + 0.02 )
                 mean( e$A[,1] > e$A[,5] + 0.03 )
                 mean( e$A[,1] > e$A[,5] + 0.04 )
                 mean( e$A[,1] > e$A[,5] + 0.05 )
                 mean( e$A[,1] > e$A[,5] + 0.06 )
                 mean( e$A[,1] > e$A[,5] + 0.07 )
                 mean( e$A[,1] > e$A[,5] + 0.08 )



# Any posterior distribution tends to the Dirac measure centered at the
# true parameter, under the assumption that the model is correct in the sense that
# the true distribution belongs to the model family.
# Thus, using this procedure, we approach
# the true parameter as the sample size grows.


#      Close the graphic device to avoid errors in R CMD check.

                      Close_all_graphic_devices()












#========================================================================================
#                               The 6-th Example for MRMC data
#========================================================================================



# To draw FROC curves for each modality and each reader, the author provides codes.
# First, we make a fitted object of class stanfitExtended as following manner.


     fit <- fit_Bayesian_FROC( ite  = 1111,
                                cha = 1,
                            summary = FALSE,
                   Null.Hypothesis  = FALSE,
                           dataList = dd # This is a MRMC dataset.
                              )

# Using this fitted model object called fit, we can draw FROC curves for the
# 1-st modality as following manner:


DrawCurves(
# This is a fitted model object
           fit,

# Here, the modality is specified
           modalityID = 1,

# Reader is specified as 1,2,3,4
           readerID = 1:4,

# If TRUE, the new imaging device is created and curves are drawn on it.
            new.imaging.device = TRUE
            )



# The next calls are almost the same, except for the modality ID and new.imaging.device.
# Setting new.imaging.device = FALSE means the curves are drawn on
# the previous imaging device, so the 1st and 2nd modality curves appear in the same
# plot plane. Drawing different curves in the same plane lets us compare modalities.
# The interpretation of an FROC curve is the same as that of an ordinary ROC curve,
# that is,
# the higher the curve, the better the observer performance for that modality.
# So, please enjoy drawing curves.

           DrawCurves(fit,modalityID = 2,readerID = 1:4, new.imaging.device = FALSE)
           DrawCurves(fit,modalityID = 3,readerID = 1:4, new.imaging.device = FALSE)
           DrawCurves(fit,modalityID = 4,readerID = 1:4, new.imaging.device = FALSE)
           DrawCurves(fit,modalityID = 5,readerID = 1:4, new.imaging.device = FALSE)


                      Close_all_graphic_devices()

#========================================================================================
#                               The 7-th example NON-CONVERGENT CASE 2019 OCT.
#========================================================================================






ff <- fit_Bayesian_FROC( ite  = 1111,  cha = 1, summary = TRUE, dataList = ddd )








dat <- list(
  c=c(3,2,1),    #Confidence level
  h=c(73703933,15661264,12360003), #Number of hits for each confidence level
  f=c(1738825,53666125 , 254965774),  #Number of false alarms for each confidence level

  NL=100000000,       #Number of lesions
  NI=200000000,        #Number of images
  C=3)          #Number of confidence level





# From the examples of the function mu_truth_creator_for_many_readers_MRMC_data()
#========================================================================================
#                  Large number of readers cause non-convergence
#========================================================================================


  v <- v_truth_creator_for_many_readers_MRMC_data(M=4,Q=6)
m <- mu_truth_creator_for_many_readers_MRMC_data(M=4,Q=6)
d <-create_dataList_MRMC(mu.truth = m,v.truth = v)
#fit <- fit_Bayesian_FROC( ite  = 111,  cha = 1, summary = TRUE, dataList = d )

plot_FPF_and_TPF_from_a_dataset(d)




#========================================================================================
#                             convergence
#========================================================================================



 v  <- v_truth_creator_for_many_readers_MRMC_data(M=2,Q=21)
 m  <- mu_truth_creator_for_many_readers_MRMC_data(M=2,Q=21)
 d  <- create_dataList_MRMC(mu.truth = m,v.truth = v)
fit <- fit_Bayesian_FROC( ite  = 200,  cha = 1, summary = TRUE, dataList = d)

 plot_FPF_TPF_via_dataframe_with_split_factor(d)
 plot_empirical_FROC_curves(d,readerID = 1:21)
#========================================================================================
#                            non-convergence
#========================================================================================



v  <- v_truth_creator_for_many_readers_MRMC_data(M=5,Q=6)
 m  <- mu_truth_creator_for_many_readers_MRMC_data(M=5,Q=6)
 d  <- create_dataList_MRMC(mu.truth = m,v.truth = v)
#fit <- fit_Bayesian_FROC( ite  = 111,  cha = 1, summary = TRUE, dataList = d)



#========================================================================================
#                           convergence
#========================================================================================


v  <- v_truth_creator_for_many_readers_MRMC_data(M=1,Q=36)
m  <- mu_truth_creator_for_many_readers_MRMC_data(M=1,Q=36)
d  <- create_dataList_MRMC(mu.truth = m,v.truth = v)
#fit <- fit_Bayesian_FROC(ite = 111, cha = 1,summary = TRUE, dataList = d, see = 123)










#========================================================================================
#                            non-convergence
#========================================================================================


v  <- v_truth_creator_for_many_readers_MRMC_data(M=1,Q=37)
m  <- mu_truth_creator_for_many_readers_MRMC_data(M=1,Q=37)
d  <- create_dataList_MRMC(mu.truth = m,v.truth = v)
#fit <- fit_Bayesian_FROC( ite  = 111,  cha = 1, summary = TRUE, dataList = d)





#========================================================================================
#                            convergence A single modality and 11 readers
#========================================================================================

v <- v_truth_creator_for_many_readers_MRMC_data(M=1,Q=11)
m <- mu_truth_creator_for_many_readers_MRMC_data(M=1,Q=11)
d <- create_dataList_MRMC(mu.truth = m,v.truth = v)
 fit <- fit_Bayesian_FROC( ite = 111,
                          cha = 1,
                      summary = TRUE,
                     dataList = d,
                          see = 123455)

DrawCurves( summary = FALSE,
         modalityID = c(1:fit@dataList$M),
            readerID = c(1:fit@dataList$Q),
            StanS4class = fit  )





#========================================================================================
#                            convergence A single modality and 17 readers
#========================================================================================



v <- v_truth_creator_for_many_readers_MRMC_data(M=1,Q=17)
m <- mu_truth_creator_for_many_readers_MRMC_data(M=1,Q=17)
d <- create_dataList_MRMC(mu.truth = m,v.truth = v)
fit <- fit_Bayesian_FROC( ite = 1111, cha = 1, summary = TRUE, dataList = d,see = 123455)


DrawCurves( summary = FALSE,   modalityID = c(1:fit@dataList$M),
            readerID = c(1:fit@dataList$Q),fit  )


DrawCurves( summary = FALSE,   modalityID = 1,
            readerID = c(8,9),fit  )
#
## For readerID 8,9, this model is bad
#
Close_all_graphic_devices()





#========================================================================================
#                            convergence 37 readers, 1 modality
#========================================================================================



v  <- v_truth_creator_for_many_readers_MRMC_data(M=1,Q=37)
m  <- mu_truth_creator_for_many_readers_MRMC_data(M=1,Q=37)
d  <- create_dataList_MRMC(mu.truth = m,v.truth = v)
fit <- fit_Bayesian_FROC(see = 2345678, ite  = 1111,  cha = 1, summary = TRUE, dataList = d)


DrawCurves( summary = FALSE,   modalityID = c(1:fit@dataList$M),
            readerID = c(1:fit@dataList$Q),fit  )


DrawCurves( summary = FALSE,   modalityID = 1,
            readerID = c(8,9),fit  )

# In the following, consider two readers whose IDs are 8 and 15, respectively.
# Obviously, one of them has higher performance than the other;
# however,
# sometimes the FROC curve does not reflect it,
# namely, one FROC curve lies above the other
# even if the FPF and TPF do not. Why?




DrawCurves( summary = FALSE,   modalityID = 1,
            readerID = c(8,15),fit  )

Close_all_graphic_devices()



Close_all_graphic_devices()

## End(Not run)# dontrun
