Calculate all metrics at once
get_all_metrics.Rd
This function computes a comprehensive set of fairness-related performance metrics across the levels of a sensitive attribute. It includes standard classification metrics (e.g., TPR, FPR, PPV, NPV) as well as fairness-specific indicators like predicted positive rates and error ratios.
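To make the per-group computation concrete, here is a small base-R sketch (illustrative only, not the package's internal code) of how a rate such as the TPR is obtained separately for each level of a sensitive attribute:

```r
# Illustrative sketch: TPR = P(predicted positive | actually positive),
# computed within each group level. Toy data, not from the package.
y    <- c(1, 1, 0, 0, 1, 0, 1, 0)                 # observed binary outcome
yhat <- c(1, 0, 0, 1, 1, 0, 0, 0)                 # thresholded prediction
g    <- c("A", "A", "A", "A", "B", "B", "B", "B")  # sensitive attribute

tpr_by_group <- tapply(seq_along(y), g, function(i) {
  mean(yhat[i][y[i] == 1])  # among actual positives in this group
})
tpr_by_group  # A: 0.5, B: 0.5
```

The other rates (FPR, PPV, NPV, predicted positive rate) follow the same pattern with different conditioning sets.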
Arguments
- data
Data frame containing the outcome, predicted outcome, and sensitive attribute.
- outcome
The name of the outcome variable; it must be binary.
- group
The name of the sensitive attribute.
- probs
The name of the variable holding predicted probabilities.
- cutoff
The threshold applied to the predicted probabilities; the default is 0.5.
- digits
The number of digits to round the results to; the default is 2.
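How `cutoff` and `digits` plausibly behave can be sketched as follows (an assumption about the conventions, including whether the comparison is `>=` or `>`, not the package's internal code):

```r
# Sketch: probabilities at or above the cutoff are treated as predicted
# positives, and reported metrics are rounded to `digits` places.
# (Assumed conventions; not the internal implementation of get_all_metrics.)
probs  <- c(0.12, 0.55, 0.49, 0.91)
cutoff <- 0.5
pred   <- as.integer(probs >= cutoff)
pred                            # 0 1 0 1
round(mean(probs), digits = 2)  # 0.52
```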
Details
This is useful for quickly assessing multiple fairness dimensions of a binary classifier in one step.
Examples
# \donttest{
library(fairmetrics)
library(dplyr)
library(magrittr)
library(randomForest)
data("mimic_preprocessed")
set.seed(123)
train_data <- mimic_preprocessed %>%
  dplyr::filter(dplyr::row_number() <= 700)
# Fit a random forest model
rf_model <- randomForest::randomForest(factor(day_28_flg) ~ ., data = train_data, ntree = 1000)
# Test the model on the remaining data
test_data <- mimic_preprocessed %>%
  dplyr::mutate(gender = ifelse(gender_num == 1, "Male", "Female")) %>%
  dplyr::filter(dplyr::row_number() > 700)
test_data$pred <- predict(rf_model, newdata = test_data, type = "prob")[, 2]
# Fairness evaluation
# We will use gender as the sensitive attribute and day_28_flg as the outcome.
# We choose threshold = 0.41 so that the overall FPR is around 5%.
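# (Sketch, an assumption rather than part of the package documentation: one
# way to pick such a cutoff is the 95th percentile of predicted probabilities
# among observed negatives, so that roughly 5% of negatives exceed it.)
# cutoff_fpr05 <- quantile(test_data$pred[test_data$day_28_flg == 0], 0.95)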
# Calculate All Metrics
get_all_metrics(
  data = test_data,
  outcome = "day_28_flg",
  group = "gender",
  probs = "pred",
  cutoff = 0.41
)
#> Metric Group Female Group Male
#> 1 TPR 0.62 0.38
#> 2 FPR 0.08 0.03
#> 3 PPR 0.17 0.08
#> 4 PPV 0.62 0.66
#> 5 NPV 0.92 0.90
#> 6 ACC 0.87 0.88
#> 7 Brier Score 0.09 0.08
#> 8 FN/FP Ratio 1.03 3.24
#> 9 Avg Pred Prob 0.21 0.14
# }
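The returned table is an ordinary data frame, so group disparities can be summarized afterwards. A small sketch (the column names below are assumed from the printed output above):

```r
# Sketch: deriving a between-group ratio from the metrics table.
res <- data.frame(
  Metric = c("TPR", "FPR"),
  `Group Female` = c(0.62, 0.08),
  `Group Male` = c(0.38, 0.03),
  check.names = FALSE
)
res$Ratio <- round(res$`Group Female` / res$`Group Male`, 2)
res$Ratio  # 1.63 2.67
```

Here, for example, the FPR ratio of about 2.67 indicates that false positives are substantially more common among female patients at this cutoff.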