CompareTests: Correct for Verification Bias in Diagnostic Accuracy & Agreement

A standard test is observed on all specimens, while the second (sampled) test is conducted on only a stratified subsample of specimens. Verification bias arises when which specimens receive the second (sampled) test is not under investigator control. We treat the total sample as a stratified two-phase sample and use inverse probability weighting. We estimate diagnostic accuracy (category-specific classification probabilities, which for binary tests reduce to sensitivity and specificity, as well as predictive values) and agreement statistics (percent agreement, percent agreement by category, unweighted Kappa, quadratically weighted Kappa, and symmetry tests, which reduce to McNemar's test for binary tests). See: Katki HA, Li Y, Edelstein DW, Castle PE. Estimating the agreement and diagnostic accuracy of two diagnostic tests when one test is conducted on only a subsample of specimens. Stat Med. 2012;31(5). doi:10.1002/sim.4422.
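
A minimal usage sketch follows. It applies the estimators to the bundled specimens data, treating the standard test as observed on every specimen and the sampled test as observed only within sampling strata. The column names (stdtest, sampledtest, stratum) and the goldstd argument value are assumptions for illustration; check ?CompareTests and names(specimens) before running.

library(CompareTests)

## Fictitious specimens data shipped with the package: each specimen has a
## standard-test result, a sampled-test result (missing when not retested),
## and the stratum used to select specimens for the sampled test.
data(specimens)
str(specimens)  # confirm the actual column names before use

## Inverse-probability-weighted diagnostic accuracy and agreement estimates.
## Column names below are assumed, not verified against the data set.
out <- CompareTests(specimens$stdtest,
                    specimens$sampledtest,
                    specimens$stratum,
                    goldstd = "sampledtest")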

Author
Hormuzd A. Katki and David W. Edelstein
Date of publication
2015-07-12 17:53:51
Maintainer
Hormuzd Katki <katkih@mail.nih.gov>
License
GPL-3
Version
1.1

Man pages

CompareTests
Correct for Verification Bias in Diagnostic Accuracy & Agreement
CompareTests-package
Correct for Verification Bias in Diagnostic Accuracy & Agreement
fulltable
fulltable attaches margins and NA/NaN category to the output... (a usage sketch follows this list)
specimens
Fictitious data on specimens tested by two methods
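
The fulltable helper is described above as attaching margins and an NA/NaN category to tabulated output. The sketch below assumes it accepts factor arguments the same way base table() does; that calling convention is an assumption, so consult ?fulltable before relying on it.

library(CompareTests)
data(specimens)

## Cross-tabulate the two tests. Assuming fulltable() behaves like table()
## but also appends margins and an NA/NaN category, specimens never retested
## by the sampled method stay visible in the output.
fulltable(specimens$stdtest, specimens$sampledtest)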

Files in this package

CompareTests
CompareTests/NAMESPACE
CompareTests/NEWS
CompareTests/data
CompareTests/data/specimens.rda
CompareTests/R
CompareTests/R/fulltable.R
CompareTests/R/CompareTests.R
CompareTests/MD5
CompareTests/DESCRIPTION
CompareTests/man
CompareTests/man/specimens.Rd
CompareTests/man/fulltable.Rd
CompareTests/man/CompareTests.Rd
CompareTests/man/CompareTests-package.Rd