
Examine accuracy parity of a model

Usage

eval_acc_parity(
  data,
  outcome,
  group,
  probs,
  cutoff = 0.5,
  confint = TRUE,
  alpha = 0.05,
  bootstraps = 2500,
  digits = 2,
  message = TRUE
)

Arguments

data

Data frame containing the outcome, predicted probabilities, and sensitive attribute

outcome

Name of the outcome variable

group

Name of the sensitive attribute

probs

Name of the variable containing the predicted probabilities

cutoff

Probability cutoff for classifying a prediction as positive, default is 0.5

alpha

The significance level for the confidence interval (the confidence level is 1 - alpha), default is 0.05, giving a 95% interval

bootstraps

Number of bootstrap replicates used to construct the confidence intervals

digits

Number of digits to round the results to, default is 2

message

Logical indicating whether to print the results, default is TRUE

confint

Logical indicating whether to calculate confidence intervals

Value

A list containing the following elements:

  • Accuracy for Group 1

  • Accuracy for Group 2

  • Difference in accuracy

If confidence intervals are computed (confint = TRUE):

  • A vector of length 2 containing the lower and upper bounds of the (1 - alpha) level confidence interval for the difference in accuracy
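The quantity this function reports can be illustrated with a minimal base-R sketch. Everything below is synthetic and for illustration only: the data frame `df` and its columns `grp`, `y`, and `p` are made up, and the sketch computes only the point estimates, not the bootstrap confidence interval.

```r
set.seed(1)
n <- 200

# Synthetic data: a binary outcome, a two-level sensitive attribute,
# and predicted probabilities from some hypothetical model
df <- data.frame(
  grp = rep(c("a", "b"), each = n / 2),
  y   = rbinom(n, 1, 0.4),
  p   = runif(n)
)

# Classify at the cutoff, then compute accuracy within each group
df$pred <- as.integer(df$p >= 0.5)
acc <- tapply(df$pred == df$y, df$grp, mean)

acc[["a"]]              # accuracy for group 1
acc[["b"]]              # accuracy for group 2
acc[["a"]] - acc[["b"]] # difference in accuracy (the parity gap)
```

A gap near 0 indicates the classifier is about equally accurate for both groups at the chosen cutoff; the bootstrap interval returned by the function quantifies the uncertainty around that gap.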