error_srsc: Validation via replicated datasets from a model at a given...


View source: R/validation_error_srsc.R

Description

For a given true parameter, prints the errors of the estimates obtained from replicated datasets.

Also prints a standard error, i.e., the variability (variance) of the estimates.

Suppose that θ_0 is a given true model parameter, with a number of images N_I and a number of lesions N_L specified by the user.

(I)
(I.1) Synthesize a collection of datasets D_k (k=1,2,...,K) from the likelihood (model) at a given parameter θ_0, namely

D_k \sim likelihood(θ_0).

(I.2) Fit the model to each replicated dataset D_k (k=1,2,...,K); that is, draw MCMC samples \{ θ_i(D_k); i=1,...,I \} from the posterior for each dataset D_k, namely

θ_i(D_k) \sim π(θ|D_k).

(I.3) Calculate the posterior mean for each dataset D_k (k=1,2,...,K), namely

\bar{θ}(D_k) := \frac{1}{I} ∑_i θ_i(D_k) .

(I.4) Calculate the error for each dataset D_k:

ε_k := Truth - estimate = θ_0 - \bar{θ}(D_k).

(II) Calculate the mean of the errors over all datasets D_k (k=1,2,...,K):

mean of errors \bar{ε}(θ_0,N_I,N_L) = \frac{1}{K} ∑_k ε_k .

NOTE

We note that if a fitted model does not converge (namely, if R hat is far from one), then it is omitted from this calculation.

(III) Calculate the mean of the errors for various numbers of lesions and images:

mean of errors \bar{ε}(θ_0,N_I,N_L)

For example, if (N_I^1,N_L^1), (N_I^2,N_L^2), (N_I^3,N_L^3), ..., (N_I^m,N_L^m) are given, then \bar{ε}(θ_0,N_I^1,N_L^1), \bar{ε}(θ_0,N_I^2,N_L^2), \bar{ε}(θ_0,N_I^3,N_L^3), ..., \bar{ε}(θ_0,N_I^m,N_L^m) are calculated.

To obtain a precise error, the number of replicated fitted models (denoted by K) should be large enough; if K is small, it causes bias. K corresponds to the variable replicate.datset of the function error_srsc.

Running this function, we can see that the error \bar{ε}(θ_0,N_I,N_L) decreases monotonically as the number of images N_I or the number of lesions N_L increases.

The scale of the error is also obtained; thus this function shows how accurate our estimates are. The scale of the error differs for each component of the model parameters.
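For intuition, the following is a minimal, self-contained R sketch of steps (I)-(II), using a toy Gaussian likelihood in place of the FROC model (theta0, K, and n.obs are illustrative names, not arguments of this package):

    # Toy illustration of steps (I)-(II): replicate datasets from a known truth,
    # estimate the parameter from each replicate, and average the errors.
    theta0 <- 0.6     # a given "true" parameter
    K      <- 100     # number of replicated datasets
    n.obs  <- 50      # size of each replicated dataset
    set.seed(123)
    errors <- numeric(K)
    for (k in seq_len(K)) {
      D.k       <- rnorm(n.obs, mean = theta0, sd = 1)  # (I.1) D_k ~ likelihood(theta0)
      theta.bar <- mean(D.k)   # (I.3) stands in for the posterior mean
      errors[k] <- theta0 - theta.bar                   # (I.4) truth - estimate
    }
    mean(errors)      # (II) mean of errors; shrinks toward 0 as n.obs grows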

Revised 2019 August 28

Usage

error_srsc(
  NLvector = c(100L, 10000L, 1000000L),
  ratio = 2,
  replicate.datset = 3,
  ModifiedPoisson = FALSE,
  mean.truth = 0.6,
  sd.truth = 5.3,
  z.truth = c(-0.8, 0.7, 2.38),
  ite = 2222,
  cha = 1,
  verbose = FALSE
)

Arguments

NLvector

A vector of positive integers, specifying a collection of numbers of lesions.

ratio

A positive rational number that determines the number of images via the formula: (number of images) = ratio × (number of lesions). Note that in the calculation, ratio * NLvector is rounded to an integer.
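For instance, the rounding rule amounts to the following one-liner (values taken from the Usage defaults above):

    NLvector <- c(100L, 10000L, 1000000L)
    ratio    <- 2
    round(ratio * NLvector)   # the number of images used for each setting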

replicate.datset

A positive integer specifying how many datasets are replicated from the model at the user-specified parameter; this is the K of the Description.

ModifiedPoisson

Logical, that is TRUE or FALSE.

If ModifiedPoisson = TRUE, then the Poisson rate of false alarms is calculated per lesion, and the model is fitted so that the FROC curve is the expected curve of points consisting of the pairs of TPF per lesion and FPF per lesion.

Similarly,

If ModifiedPoisson = FALSE, then the Poisson rate of false alarms is calculated per image, and the model is fitted so that the FROC curve is the expected curve of points consisting of the pairs of TPF per lesion and FPF per image.

For more details, see the author's paper, in which the per-image and per-lesion formulations are explained. (For details of the models, see the vignettes; they are currently omitted from this package because of their size.)

If ModifiedPoisson = TRUE, then the False Positive Fraction (FPF) is defined as follows (F_c denotes the number of false alarms with confidence level c )

\frac{F_1+F_2+F_3+F_4+F_5}{N_L},

\frac{F_2+F_3+F_4+F_5}{N_L},

\frac{F_3+F_4+F_5}{N_L},

\frac{F_4+F_5}{N_L},

\frac{F_5}{N_L},

where N_L is the number of lesions (signals). To emphasize the denominator N_L, we also call this the False Positive Fraction (FPF) per lesion.

On the other hand,

if ModifiedPoisson = FALSE (Default), then False Positive Fraction (FPF) is given by

\frac{F_1+F_2+F_3+F_4+F_5}{N_I},

\frac{F_2+F_3+F_4+F_5}{N_I},

\frac{F_3+F_4+F_5}{N_I},

\frac{F_4+F_5}{N_I},

\frac{F_5}{N_I},

where N_I is the number of images (trials). To emphasize the denominator N_I, we also call this the False Positive Fraction (FPF) per image.
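As a concrete illustration, both conventions can be computed from the false-alarm counts with a reversed cumulative sum; the counts below are made-up values:

    false.alarms <- c(10, 7, 5, 3, 1)   # hypothetical F_1, ..., F_5
    N_L <- 100                          # number of lesions
    N_I <- 200                          # number of images
    # rev(cumsum(rev(.))) yields F_c + F_{c+1} + ... + F_5 for c = 1, ..., 5
    cum.F <- rev(cumsum(rev(false.alarms)))
    cum.F / N_L   # FPF per lesion (ModifiedPoisson = TRUE)
    cum.F / N_I   # FPF per image  (ModifiedPoisson = FALSE)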

The model is fitted so that the estimated FROC curve can be regarded as the expected pairs of FPF per image and TPF per lesion (ModifiedPoisson = FALSE),

or as the expected pairs of FPF per lesion and TPF per lesion (ModifiedPoisson = TRUE).

If ModifiedPoisson = TRUE, then the FROC curve means the expected pairs of FPF per lesion and TPF.

On the other hand, if ModifiedPoisson = FALSE, then the FROC curve means the expected pairs of FPF per image and TPF.

So the data of FPF and TPF change, and hence the fitted model also changes, according to whether ModifiedPoisson is TRUE or FALSE. Traditional FROC analysis uses only the per-image (trial) formulation. Since one image can be divided into two or more images, the number of trials is not essential; what matters more is the per-signal formulation. For this reason, the author also developed FROC theory for the per-signal setting. One can see that the FROC curve is rigid with respect to changes in the number of images, so it does not matter whether ModifiedPoisson is TRUE or FALSE. This rigidity of the curves means that the number of images is a redundant parameter in an FROC trial, and thus the author tries to exclude it.

Revised 2019 Dec 8; 2019 Nov 25; 2019 August 28

mean.truth

This is a parameter of the latent Gaussian assumption for the noise distribution.

sd.truth

This is a parameter of the latent Gaussian assumption for the noise distribution.

z.truth

This is a parameter of the latent Gaussian assumption: a vector of thresholds of the latent variable. Its length determines the number of confidence levels (e.g., the 4-component z.truth in the last example below gives 4 confidence levels).

ite

A variable to be passed to the function rstan::sampling() of rstan, in which it is named iter: a positive integer representing the number of samples drawn by the Hamiltonian Monte Carlo method (the default is shown in the Usage section above).

cha

A variable to be passed to the function rstan::sampling() of rstan, in which it is named chains: a positive integer representing the number of chains generated by the Hamiltonian Monte Carlo method. Default = 1.

verbose

A logical. If TRUE, a verbose summary is printed in the R console; if FALSE, output from this function is suppressed.

Details

In Bayesian inference, if the sample size is large, then the posterior tends to a Dirac measure. So the error and the variance of the estimates should tend to zero as the sample size tends to infinity.

This function checks this phenomenon.

If the model has a problem, then the error contains some bias that does not decrease with respect to the sample size.
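A toy numerical check of this concentration, using the closed-form posterior of a Normal mean with a flat prior and known unit variance (nothing here is specific to this package):

    # Posterior of a Normal mean (flat prior, known sd = 1) is N(mean(x), 1/n),
    # so its standard deviation 1/sqrt(n) vanishes as the sample size n grows.
    set.seed(1)
    for (n in c(10, 100, 10000)) {
      x <- rnorm(n, mean = 0.6, sd = 1)
      cat("n =", n, ": posterior mean =", round(mean(x), 4),
          ", posterior sd =", round(1 / sqrt(n), 4), "\n")
    }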

Revised 2019 Nov 1

Provides the reliability of our posterior mean estimates. Using this function, we can find which digits are meaningful.

In the real world, the data for modality comparison or observer performance evaluation consist of 100 or 200 images. At this scale, any estimate of the AUC will contain an error of at most 0.0113.... So the value of the AUC should be rounded to 0.XXX and not 0.XXXX or 0.XXXXX or more, since digits beyond the order of magnitude of the error are meaningless. In this manner, we can analyze the errors.
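In R, this amounts to reporting the AUC only to the digits supported by the error; the estimate below is hypothetical:

    AUC.estimate <- 0.81234   # hypothetical posterior mean of AUC
    round(AUC.estimate, 3)    # report 0.812; further digits are noise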

We note that if we increase the number of images or lesions, the errors decrease.

For example, if we use 20,000 images in an FROC trial, then the error of the AUC will be about 0.0005..., and so on. Thus a large number of images gives us a more reliable AUC; however, a radiologist cannot read such a large number (20,000) of images.

Thus, at the practical scale of 100 or 200 images, the error will be around 0.00113....

If the number of images is given beforehand, and moreover we have obtained estimates, then we can run this function using these two inputs to find the estimated errors by simulation. Of course, the estimates are not the truth; but roughly speaking, if we assume that the estimates are not far from the truth, and that the error analysis is rigid with respect to changes of the truth, then, using the estimates as the truth, the result of this error analysis can be regarded as the actual error.


Value

A list containing the replicated datasets, the estimates, the errors, and related quantities. The returned object has many components; the ones used in the examples below are Bias.for.various.NL (the table of errors for the various numbers of lesions) and convergent.dataList.as.dataframe (only the replicated datasets whose fitted models converged).
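Since the return value has many components, one way to discover them is to inspect the fitted object directly (a sketch; a small ite is used only to keep the run short):

    datasets <- error_srsc(NLvector = c(100L), replicate.datset = 2, ite = 222)
    names(datasets)                # list the names of all returned components
    str(datasets, max.level = 1)   # one-level overview of their structure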

Examples

## Not run: 
#========================================================================================
#            0)            0-th example
#========================================================================================


   datasets <-error_srsc(
               NLvector = c(100,10000,1000000),
               ite = 2222
               )


 # By the following, we can extract only datasets whose
 # model has converged.
   datasets$convergent.dataList.as.dataframe




#========================================================================================
#            1)            1-st example
#========================================================================================
# A wide R console is required.



datasets <-error_srsc(NLvector = c(
  50L,
  111L,
  11111L
  ),
  # NIvector,
  ratio=2,
  replicate.datset =3,
  ModifiedPoisson = FALSE,
  mean.truth=0.6,
  sd.truth=5.3,
  z.truth =c(-0.8,0.7,2.38),
  ite =2222
)



#========================================================================================
#            2)             Plot the error of AUC with respect to  NI
#========================================================================================


a <-error_srsc(NLvector = c(
  33L,
  50L,
  111L,
  11111L
  ),
  # NIvector,
  ratio=2,
  replicate.datset =3,
  ModifiedPoisson = FALSE,
  mean.truth=0.6,
  sd.truth=5.3,
  z.truth =c(-0.8,0.7,2.38),
  ite =2222
)






      aa <- a$Bias.for.various.NL


      error.of.AUC <- aa[8,]     # row 8 contains the error of the AUC
      y <- subset(aa[8,], select = 2:length(aa[8,]))
      y <- as.numeric(y)
      y <- abs(y)                # absolute errors
      upper_y <- max(y)
      lower_y <- min(y)

      x <- 1:length(y)



      plot(x, y, ylim = c(lower_y, upper_y))

#  From this plot, we cannot see whether the error has decreased or not.
#  Thus, we replot with a log y-axis; then we will see that the error
#  has decreased with respect to the number of images and lesions.

      # ggplot2 calls are namespaced so the example runs without library(ggplot2)
      ggplot2::ggplot(data.frame(x = x, y = y), ggplot2::aes(x = x, y = y)) +
           ggplot2::geom_line() +
           ggplot2::geom_point() +
           ggplot2::scale_y_log10()

# Revised 2019 Sept 25


# A general plot on log-log scales
df <- data.frame(x = c(10, 100, 1000, 10, 100, 1000),
                 y = c(1100, 220000, 33000000, 1300, 240000, 36000000),
                 group = c("1", "1", "1", "2", "2", "2")
)

ggplot2::ggplot(df, ggplot2::aes(x = x, y = y, shape = group)) +
  ggplot2::geom_line(position = ggplot2::position_dodge(0.2)) +             # Dodge lines by 0.2
  ggplot2::geom_point(position = ggplot2::position_dodge(0.2), size = 4) +  # Dodge points by 0.2
  ggplot2::scale_y_log10() +
  ggplot2::scale_x_log10()


#========================================================================================
#    3)   Add another parameter to the plot of the error of AUC with respect to NI
#========================================================================================


a <-error_srsc(NLvector = c(
  111L,
  11111L
  ),
  # NIvector,
  ratio=2,
  replicate.datset =3,
  ModifiedPoisson = FALSE,
  mean.truth=0.6,
  sd.truth=5.3,
  z.truth =c(-0.8,0.7,2.38),
  ite =2222
)
      aa <- a$Bias.for.various.NL


      error.of.AUC <- aa[8,]     # row 8: error of the AUC
      y1 <- subset(aa[8,], select = 2:length(aa[8,]))
      y1 <- as.numeric(y1)
      y1 <- abs(y1)

      LLL <- length(y1)

      y2 <- subset(aa[7,], select = 2:length(aa[7,]))   # row 7: another model parameter
      y2 <- as.numeric(y2)
      y2 <- abs(y2)

      y <- c(y1, y2)


      upper_y <- max(y)
      lower_y <- min(y)

    group <- rep(seq(1,2,1),1 , each=LLL)
    x <-  rep(seq(1,LLL,1),2 , each=1)
    group <-  as.character(group)
   df <-  data.frame(x=x,y=y,group=group)


                ggplot2::ggplot(df, ggplot2::aes(x = x, y = y, shape = group)) +
                  ggplot2::geom_line(position = ggplot2::position_dodge(0.2)) +             # Dodge lines by 0.2
                  ggplot2::geom_point(position = ggplot2::position_dodge(0.2), size = 4) +  # Dodge points by 0.2
                  ggplot2::scale_y_log10()
                # ggplot2::scale_x_log10()


#========================================================================================
#          Confidence level = 4
#========================================================================================





datasets <-error_srsc(NLvector = c(
  111L,
  11111L
  ),
  # NIvector,
  ratio=2,
  replicate.datset =3,
  ModifiedPoisson = FALSE,
  mean.truth=-0.22,
  sd.truth=5.72,
  z.truth =c(-0.46,-0.20,0.30,1.16),
  ite =2222
)





 error_srsc_variance_visualization(datasets)

#  The model has 7 parameters, for which ggplot2 fails with the following warning:


# The shape palette can deal with a maximum of 6 discrete values because more than 6
# becomes difficult to
# discriminate; you have 7. Consider specifying shapes manually if you must have them.


#========================================================================================
#     NaN in MCMC samples ... why? 2021 Dec
#========================================================================================

fits <- validation.dataset_srsc()

f <- fits$fit[[2]]
rstan::extract(f)$dl                       # MCMC samples of the parameter dl
sum(rstan::extract(f)$dl)                  # NaN if any sample is NaN
Is.nan.in.MCMCsamples <- as.logical(!prod(!is.nan(rstan::extract(f)$dl)))  # TRUE iff some sample is NaN
rstan::extract(f)$A[525]                   # the 525-th MCMC sample of A
a <- rstan::extract(f)$a
b <- rstan::extract(f)$b

Phi(a[525] / sqrt(b[525]^2 + 1))           # compare with A[525] above
a[525] / sqrt(b[525]^2 + 1)
Phi(a / sqrt(b^2 + 1))
x <- rstan::extract(f)$dl[2]

a / (b^2 + 1)
Phi(a / (b^2 + 1))
mean(Phi(a / (b^2 + 1)))




## End(Not run)
