View source: R/estimateMetaI.R
estimateMetaI estimates meta-I, an information-theoretic measure of metacognitive sensitivity proposed by Dayan (2023), as well as similar derived measures, including meta-I_{1}^{r} and meta-I_{2}^{r}. These are different normalizations of meta-I:

meta-I_{1}^{r} normalizes by the meta-I that would be expected from an underlying normal distribution with the same sensitivity.

meta-I_{1}^{r\prime} is a variant of meta-I_{1}^{r} not discussed by Dayan (2023), which normalizes by the meta-I that would be expected from an underlying normal distribution with the same accuracy (this is similar to the sensitivity approach but without considering variable thresholds).

meta-I_{2}^{r} normalizes by the maximum amount of meta-I, which would be reached if all uncertainty about the stimulus were removed.

RMI normalizes meta-I by the range of its possible values and therefore scales between 0 and 1. RMI is a novel measure not discussed by Dayan (2023).
All measures can be calculated with a bias-reduced variant, for which the observed frequencies are taken as the underlying probability distribution in order to estimate the sampling bias. The estimated bias is then subtracted from the initial measures. This approach uses Monte Carlo simulations and is therefore not deterministic (values can vary from one evaluation of the function to the next). However, it is a simple way to reduce the bias inherent in these measures.
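To illustrate the bias-reduction idea only (this is a sketch, not the function's actual implementation), one could treat the observed frequencies as the true distribution, simulate many datasets of the same size from it, average the resulting estimation error, and subtract it. The estimator estimate_measure() below is a hypothetical placeholder mapping a table of counts to a value:

# Sketch of plug-in bias reduction via Monte Carlo simulation (illustrative only).
# 'counts' is a table of observed frequencies (e.g., correctness x confidence);
# 'estimate_measure' is a hypothetical estimator mapping such a table to a value.
bias_reduced_estimate <- function(counts, estimate_measure, n_sim = 1000) {
  n     <- sum(counts)
  p     <- as.vector(counts) / n        # observed frequencies taken as the true distribution
  truth <- estimate_measure(counts)     # plug-in value under that assumed truth
  sims  <- replicate(n_sim, {
    resampled <- rmultinom(1, size = n, prob = p)
    estimate_measure(array(resampled, dim = dim(counts)))
  })
  bias <- mean(sims) - truth            # estimated sampling bias of the estimator
  truth - bias                          # initial estimate minus the estimated bias
}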
estimateMetaI(data, bias_reduction = TRUE)
data: a data.frame where each row corresponds to one trial, containing the presented stimulus, the (binary) response, the confidence rating, and a participant identifier (see Details and the MaskOri example data).

bias_reduction: logical; if TRUE (the default), the bias-reduced variants of the measures are computed using Monte Carlo simulations as described above.
It is assumed that a classifier (possibly a human being performing a discrimination task, or an algorithmic classifier in a classification application) makes a binary prediction R about the true state of the world S and gives a confidence rating C.
Meta-I is defined as the mutual information between confidence and accuracy and is calculated as the transmitted information minus the minimal information given the accuracy:

meta-I = I(S; R, C) - I(S; R).

This is equivalent to Dayan's formulation, where meta-I is the information that confidence transmits about the correctness of a response:

meta-I = I(S = R; C).
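As a concrete illustration of the second formulation (a self-contained sketch with made-up data; the variable names are illustrative and not those required by estimateMetaI), I(S = R; C) can be computed from the joint distribution of correctness and confidence:

# Illustrative computation of meta-I = I(S = R; C) from simulated trials.
set.seed(1)
n        <- 1000
stimulus <- rbinom(n, 1, 0.5)                                # true state S
response <- ifelse(runif(n) < 0.75, stimulus, 1 - stimulus)  # binary prediction R (about 75% correct)
correct  <- as.integer(stimulus == response)                 # accuracy, S = R
rating   <- rbinom(n, 3, ifelse(correct == 1, 0.7, 0.4))     # confidence C, higher when correct

joint  <- table(correct, rating) / n       # joint distribution of (S = R, C)
p_acc  <- rowSums(joint)                   # marginal distribution of correctness
p_conf <- colSums(joint)                   # marginal distribution of confidence

meta_I <- sum(joint * log2(joint / outer(p_acc, p_conf)), na.rm = TRUE)  # I(S = R; C) in bits
meta_I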
Meta-I is expressed in bits, i.e. the log base is 2. The other measures are different normalizations of meta-I and are unitless.
It should be noted that Dayan (2023) pointed out that a liberal or conservative use of the confidence levels will affect the mutual information and thus influence meta-I.
a data.frame with one row for each subject and the following columns:

participant is the participant ID,
meta_I is the estimated meta-I value (expressed in bits, i.e. log base is 2),
meta_Ir1 is meta-I_{1}^{r},
meta_Ir1_acc is meta-I_{1}^{r\prime},
meta_Ir2 is meta-I_{2}^{r}, and
RMI is RMI.
Sascha Meyen, saschameyen@gmail.com
Dayan, P. (2023). Metacognitive Information Theory. Open Mind, 7, 392–411. doi:10.1162/opmi_a_00091
# 1. Select two subjects from the masked orientation discrimination experiment
data <- subset(MaskOri, participant %in% c(1:2))
head(data)
# 2. Calculate meta-I measures with bias reduction (this may take 10 s per subject)
metaIMeasures <- estimateMetaI(data)
# 3. Calculate meta-I measures for all participants without bias reduction (much faster)
metaIMeasures <- estimateMetaI(MaskOri, bias_reduction = FALSE)
metaIMeasures
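As a hypothetical follow-up (relying only on the output columns listed above), the returned data.frame can be summarized like any other:

# 4. Hypothetical follow-up: summarize the estimates across participants
mean(metaIMeasures$meta_I)   # average meta-I in bits
mean(metaIMeasures$RMI)      # average RMI, which scales between 0 and 1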