
[Experimental]

Provides a concise summary of the content of MCTab objects. Computes sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios for a diagnostic test evaluated against a reference/gold standard. Computes positive/negative percent agreement, overall percent agreement and Kappa when the new test is evaluated by comparison to a non-reference standard. Computes average positive/negative agreement when neither test is a reference, such as in paired reader precision.

Usage

getAccuracy(object, ...)

# S4 method for MCTab
getAccuracy(
  object,
  ref = c("r", "nr", "bnr"),
  alpha = 0.05,
  r_ci = c("wilson", "wald", "clopper-pearson"),
  nr_ci = c("wilson", "wald", "clopper-pearson"),
  bnr_ci = "bootstrap",
  bootCI = c("perc", "norm", "basic", "stud", "bca"),
  nrep = 1000,
  rng.seed = NULL,
  digits = 4,
  ...
)

Arguments

object

(MCTab)
input from the diagTab function: a 2x2 contingency table.

...

other arguments to be passed to DescTools::BinomCI.

ref

(character)
reference condition. Choose the option that matches your design: r indicates that the comparative test is a standard reference, nr indicates that the comparative test is not a standard reference, and bnr indicates that neither the new test nor the comparative test is a reference.

alpha

(numeric)
type-I-risk, \(\alpha\).

r_ci

(string)
string specifying the method used to compute the confidence interval for a diagnostic test with a reference/gold standard. Default is wilson. Options are wilson, wald and clopper-pearson; see DescTools::BinomCI.

nr_ci

(string)
string specifying the method used to compute the confidence interval for the comparative test with a non-reference standard. Default is wilson. Options are wilson, wald and clopper-pearson; see DescTools::BinomCI.
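
The three interval methods accepted by r_ci and nr_ci can be tried directly with DescTools::BinomCI. A minimal sketch on hypothetical counts (88 positive results out of 100), unrelated to any dataset in this package:

library(DescTools)

# Hypothetical proportion: 88 successes out of 100 trials;
# conf.level corresponds to 1 - alpha
BinomCI(x = 88, n = 100, conf.level = 0.95, method = "wilson")
BinomCI(x = 88, n = 100, conf.level = 0.95, method = "wald")
BinomCI(x = 88, n = 100, conf.level = 0.95, method = "clopper-pearson")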

bnr_ci

(string)
string specifying the method used to compute the confidence interval when neither test is a reference, as in reader precision. Default is bootstrap. However, when the point estimate of APA or ANA equals 0 or 100%, the method falls back to a transformed Wilson interval.

bootCI

(string)
string specifying which bootstrap confidence interval to take from the boot.ci() function in the boot package. Default is perc (bootstrap percentile); options are norm (normal approximation), basic (basic bootstrap), stud (studentized bootstrap) and bca (adjusted bootstrap percentile).
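
For intuition about the bootCI types, here is a minimal sketch (not getAccuracy's internal code) of a percentile bootstrap interval for overall percent agreement, built with boot() and boot.ci() on a hypothetical 2x2 table of paired calls; real bnr data with replicated reads may need to be resampled by sample rather than by pair.

library(boot)

# Hypothetical 2x2 table of paired calls; neither test is a reference
tab <- matrix(c(61, 8, 8, 23), nrow = 2,
              dimnames = list(Test1 = c("Pos", "Neg"),
                              Test2 = c("Pos", "Neg")))

# Expand to one row per pair so pairs can be resampled with replacement
dat <- as.data.frame(as.table(tab))
dat <- dat[rep(seq_len(nrow(dat)), dat$Freq), c("Test1", "Test2")]

# Statistic: overall percent agreement of the resampled pairs
opa_stat <- function(d, idx) mean(d$Test1[idx] == d$Test2[idx])

set.seed(12306)                          # plays the role of rng.seed
bt <- boot(dat, opa_stat, R = 1000)      # R plays the role of nrep
boot.ci(bt, conf = 0.95, type = "perc")  # type matches the bootCI options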

nrep

(integer)
number of bootstrap replicates. Default is 1000.

rng.seed

(integer)
seed for the random number generator used in bootstrap sampling. If set to NULL, the RNG settings of the current R session are used.

digits

(integer)
the desired number of digits. Default is 4.

Value

A data frame containing the qualitative diagnostic accuracy criteria, with three columns: the point estimate (EST) and the lower and upper confidence limits (LowerCI, UpperCI). The sketch after this list illustrates the formulas behind these criteria.

  • sens: Sensitivity refers to how often the test is positive when the condition of interest is present.

  • spec: Specificity refers to how often the test is negative when the condition of interest is absent.

  • ppv: Positive predictive value refers to the percentage of subjects with a positive test result who have the target condition.

  • npv: Negative predictive value refers to the percentage of subjects with a negative test result who do not have the target condition.

  • plr: Positive likelihood ratio is the true positive rate divided by the false positive rate.

  • nlr: Negative likelihood ratio is the false negative rate divided by the true negative rate.

  • ppa: Positive percent agreement; equals sensitivity when the candidate method is evaluated against a comparative method rather than a reference/gold standard.

  • npa: Negative percent agreement; equals specificity when the candidate method is evaluated against a comparative method rather than a reference/gold standard.

  • opa: Overall percent agreement.

  • kappa: Cohen's kappa coefficient to measure the level of agreement.

  • apa: Average positive agreement; measures agreement on positive results and can be regarded as a weighted ppa.

  • ana: Average negative agreement; measures agreement on negative results and can be regarded as a weighted npa.
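
A minimal sketch of the formulas behind these criteria, written as plain R arithmetic on a 2x2 table. The counts are chosen to reproduce the qualData estimates shown in the Examples below, but this is an illustration, not the package's internal code; the apa/ana lines use the common definitions 2a/(2a+b+c) and 2d/(2d+b+c), which is an assumption about the package's exact formulas.

# 2x2 counts, comparative/reference result in columns
tp <- 122; fp <- 8    # candidate positive: comparative positive / negative
fn <- 16;  tn <- 54   # candidate negative: comparative positive / negative
n  <- tp + fp + fn + tn

sens <- tp / (tp + fn)       # sensitivity (ppa in the nr setting)
spec <- tn / (fp + tn)       # specificity (npa in the nr setting)
ppv  <- tp / (tp + fp)
npv  <- tn / (fn + tn)
plr  <- sens / (1 - spec)    # true positive rate / false positive rate
nlr  <- (1 - sens) / spec    # false negative rate / true negative rate
opa  <- (tp + tn) / n

# Cohen's kappa: observed agreement corrected for chance agreement
pe    <- ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n^2
kappa <- (opa - pe) / (1 - pe)

# Average positive/negative agreement (assumed common definitions)
apa <- 2 * tp / (2 * tp + fp + fn)
ana <- 2 * tn / (2 * tn + fp + fn)

round(c(sens = sens, spec = spec, ppv = ppv, npv = npv, plr = plr,
        nlr = nlr, opa = opa, kappa = kappa, apa = apa, ana = ana), 4)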

Examples

# For qualitative performance
data("qualData")
tb <- qualData %>%
  diagTab(
    formula = ~ CandidateN + ComparativeN,
    levels = c(1, 0)
  )
getAccuracy(tb, ref = "r")
#>         EST LowerCI UpperCI
#> sens 0.8841  0.8200  0.9274
#> spec 0.8710  0.7655  0.9331
#> ppv  0.9385  0.8833  0.9685
#> npv  0.7714  0.6605  0.8541
#> plr  6.8514  3.5785 13.1181
#> nlr  0.1331  0.0832  0.2131
getAccuracy(tb, ref = "nr", nr_ci = "wilson")
#>          EST LowerCI UpperCI
#> ppa   0.8841  0.8200  0.9274
#> npa   0.8710  0.7655  0.9331
#> opa   0.8800  0.8277  0.9180
#> kappa 0.7291  0.6283  0.8299

# For Between-Reader precision performance
data("PDL1RP")
reader <- PDL1RP$btw_reader
tb2 <- reader %>%
  diagTab(
    formula = Reader ~ Value,
    bysort = "Sample",
    levels = c("Positive", "Negative"),
    rep = TRUE,
    across = "Site"
  )
getAccuracy(tb2, ref = "bnr")
#>        EST LowerCI UpperCI
#> apa 0.9479  0.9266  0.9690
#> ana 0.9540  0.9342  0.9726
#> opa 0.9511  0.9311  0.9711
getAccuracy(tb2, ref = "bnr", rng.seed = 12306)
#>        EST LowerCI UpperCI
#> apa 0.9479  0.9260  0.9686
#> ana 0.9540  0.9342  0.9730
#> opa 0.9511  0.9311  0.9711