Six Methods for Drawing the Logistic Calibration Curve (Test Set)

I have previously written a very detailed tutorial on the basics of clinical prediction models, covering their definition, common evaluation methods, nomograms, ROC curves, IDI, NRI, calibration curves, decision curves, and more.

All code and data are available for free: R language clinical prediction model collection

That collection covers most of the content of clinical prediction modeling, and it easily rivals a training course costing several thousand yuan! Completing an SCI paper with it is definitely not a problem!

However, the basic version of the collection leaves a few problems unresolved, mainly around using machine learning algorithms in clinical prediction models and the detailed interpretation and implementation of such models, for which there were no dedicated tutorials. The following series of articles on clinical prediction models will therefore focus on these issues.

Today I will show you how to draw the calibration curve for the test set (or validation set). It has in fact come up before, but never as a dedicated topic, and some followers have been asking about it in the background.

Catalog of this issue:

  • Prepare data
  • Data splitting
  • Calibration curve for the training set
  • Test Set Calibration Curve Method 1
  • Test Set Calibration Curve Method 2
  • Test Set Calibration Curve Method 3

Prepare data

The data comes from an earlier post, Drawing of calibration curve for dichotomous data, which also explains how to obtain it.

lowbirth <- read.csv("../000 predictive model/lowbirth.csv")

# recode race into dummy variables
lowbirth$black <- ifelse(lowbirth$race == "black", 1, 0)
lowbirth$white <- ifelse(lowbirth$race == "white", 1, 0)
lowbirth$other <- ifelse(lowbirth$race %in% c("native American", "oriental"), 1, 0)

lowbirth$delivery <- factor(lowbirth$delivery)
lowbirth$sex <- factor(lowbirth$sex)
lowbirth$race <- NULL # drop the original race column

str(lowbirth)
## 'data.frame': 565 obs. of  12 variables:
##  $ birth   : num  81.5 81.6 81.6 81.6 81.6 ...
##  $ lowph   : num  7.25 7.06 7.25 6.97 7.32 ...
##  $ pltct   : int  244 114 182 54 282 153 229 182 361 378 ...
##  $ bwt     : int  1370 620 1480 925 1255 1350 1310 1110 1180 970 ...
##  $ delivery: Factor w/ 2 levels "abdominal","vaginal": 1 2 2 1 2 1 2 2 1 2 ...
##  $ apg1    : int  7 1 8 5 9 4 6 6 6 2 ...
##  $ vent    : int  0 1 0 1 0 0 1 0 0 1 ...
##  $ sex     : Factor w/ 2 levels "female","male": 1 1 2 1 1 1 2 2 2 1 ...
##  $ dead    : int  0 1 0 1 0 0 0 0 0 1 ...
##  $ black   : num  0 1 1 1 1 1 0 1 0 0 ...
##  $ white   : num  1 0 0 0 0 0 1 0 1 1 ...
##  $ other   : num  0 0 0 0 0 0 0 0 0 0 ...

Data splitting

Randomly split the data into training and test sets in a 7:3 ratio.

set.seed(123) # for reproducibility
ind <- sample(1:nrow(lowbirth), nrow(lowbirth) * 0.7)

train_df <- lowbirth[ind, ]  # 70% training set
test_df  <- lowbirth[-ind, ] # 30% test set
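
As a quick optional check (my own addition, not part of the original workflow), you can confirm that the outcome prevalence is similar in the two subsets after the random split:

# the event rate should be roughly similar in both subsets
prop.table(table(train_df$dead))
prop.table(table(test_df$dead))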

Calibration curve for the training set

A previous post introduced quite a few methods for drawing the training-set calibration curve for this binary-outcome data.

Here we use the rms package directly; it was covered in detail in that post, so I won't explain it again.

library(rms)

dd <- datadist(train_df)
options(datadist = "dd")

# fit the logistic regression model on the training set
fit1 <- lrm(dead ~ birth + lowph + pltct + bwt + vent + black + white,
            data = train_df, x = T, y = T)

# bias-corrected calibration via bootstrap, 100 resamples
cal1 <- calibrate(fit1, method = 'boot', B = 100)

plot(cal1,
     xlim = c(0, 1),
     ylim = c(0, 1),
     xlab = "Predicted Probability",
     ylab = "Observed Probability"
     )

(Plot: bootstrap-corrected calibration curve for the training set)

## 
## n=395   Mean absolute error=0.01   Mean squared error=0.00016
## 0.9 Quantile of absolute error=0.021

Test Set Calibration Curve Method 1

For logistic regression, the test-set calibration curve is very simple. With any algorithm that can output probabilities you can easily draw calibration curves for both the training and test sets; it comes down to nothing more than comparing predicted probabilities with observed (actual) probabilities.
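
To make that principle concrete, here is a minimal hand-rolled sketch (my own illustration, not one of the 6 methods): bin the predicted probabilities, then compare each bin's mean prediction with its observed event rate. It uses the fit1 model and test_df split defined above.

# group predicted probabilities into decile bins and compare
# mean predicted probability with the observed event rate per bin
p <- predict(fit1, test_df, type = "fitted")
bins <- cut(p, breaks = quantile(p, probs = seq(0, 1, 0.1)),
            include.lowest = TRUE)
pred_mean <- tapply(p, bins, mean)            # mean predicted probability per bin
obs_rate  <- tapply(test_df$dead, bins, mean) # observed event rate per bin

plot(pred_mean, obs_rate, xlim = c(0, 1), ylim = c(0, 1),
     xlab = "Predicted Probability", ylab = "Observed Probability")
abline(0, 1, lty = 2) # reference line of perfect calibration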

Test-set calibration curves for binary data have also come up many times in previous posts, which covered 3 implementation methods. They are essentially the same: the first 2 compute everything manually, while the last one spares you the calculation, gives you the figure directly, and neatly follows the usage conventions of yardstick.

Here I will introduce 3 more methods; together with the ones above, that makes 6 methods in total for the logistic test-set calibration curve!

This method is based on the rms package.

# First get the predicted probabilities for the test set
phat <- predict(fit1, test_df, type = 'fitted')

# val.prob draws the calibration curve directly
val.prob(phat, test_df$dead, statloc = F, cex = 1)

(Plot: val.prob calibration curve for the test set)

##         Dxy     C (ROC)          R2           D    D:Chi-sq         D:p 
##  0.74384874  0.87192437  0.42860880  0.28181902 48.90923309          NA 
##           U    U:Chi-sq         U:p           Q       Brier   Intercept 
## -0.01085663  0.15437207  0.92571762  0.29267565  0.08935692  0.12059928 
##       Slope        Emax         E90        Eavg         S:z         S:p 
##  1.08566597  0.29112563  0.07879941  0.03183303 -0.54088957  0.58858370

You can use this figure as the test-set calibration curve, but the real purpose of val.prob is to draw the calibration curve for external validation data. This is spelled out very clearly in Harrell's book Regression Modeling Strategies, and also in the function's help documentation.

That is why this method offers no resampling option: its author holds that external validation is the final test of a model and does not call for resampling.
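
A small usage note (my own addition): val.prob also returns the statistics printed above as a named vector, so you can capture them for reporting; setting pl = FALSE suppresses the plot.

# store the validation statistics instead of only printing them
stats <- val.prob(phat, test_df$dead, pl = FALSE)
stats["C (ROC)"] # test-set C statistic (AUC)
stats["Brier"]   # Brier score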

You may have seen in the literature that the calibration curves of the training set and the test set are both drawn in the style of the figure above, so that the two sets match. The implementation for the training set is just as simple:

# Get the predicted probabilities for the training set
phat_train <- predict(fit1, train_df, type = 'fitted')

# val.prob draws the calibration curve directly
val.prob(phat_train, train_df$dead, cex = 1)
          Dxy       C (ROC)            R2             D      D:Chi-sq           D:p 
 7.595104e-01  8.797552e-01  4.274140e-01  2.924665e-01  1.165243e+02            NA 
            U      U:Chi-sq           U:p             Q         Brier     Intercept 
-5.063291e-03  2.842171e-14  1.000000e+00  2.975298e-01  9.802204e-02 -8.039770e-10 
        Slope          Emax           E90          Eavg           S:z           S:p 
 1.000000e+00  3.476792e-02  3.030223e-02  1.012113e-02 -1.990865e-01  8.421951e-01 

The graph above can be used as a calibration curve for the training set.

Test Set Calibration Curve Method 2

If you insist on a resampled calibration curve for the test set, that is also quite simple, and there are many ways to do it besides rms; one alternative is sketched below.
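
For illustration, here is a minimal non-rms sketch (my own, assuming a lowess-smoothed curve with 100 bootstrap resamples is acceptable): resample the test set, smooth the observed outcome against the predicted probability each time, and overlay the curves.

# bootstrap a smoothed calibration curve by hand
set.seed(123)
p <- predict(fit1, test_df, type = "fitted") # predicted probabilities

plot(0:1, 0:1, type = "n",
     xlab = "Predicted Probability", ylab = "Observed Probability")
abline(0, 1, lty = 2) # ideal calibration line
for (i in 1:100) {    # 100 bootstrap resamples of the test set
  idx <- sample(nrow(test_df), replace = TRUE)
  lines(lowess(p[idx], test_df$dead[idx], iter = 0),
        col = adjustcolor("grey60", alpha.f = 0.4))
}
# the curve on the full test set, drawn on top
lines(lowess(p, test_df$dead, iter = 0), col = "tomato", lwd = 2)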

In this section, though, it is still implemented with the rms package.

A calibration curve for binary data boils down to computing observed versus predicted probabilities, and based on this principle we can build one ourselves. The method is as follows:

# First obtain the predicted values for the test set
# (by default, predict() for an lrm fit returns the linear predictor, i.e. the log-odds)
phat <- predict(fit1, test_df)
test_df$phat <- phat # add them to the test set

# Fit a logistic regression on the test set with the predicted value
# as the independent variable and the outcome as the dependent variable
fit2 <- lrm(dead ~ phat, data = test_df, x = T, y = T)

# Draw the calibration curve for this logistic regression
cal2 <- calibrate(fit2, method = 'boot', B = 100)

plot(cal2,
     xlim = c(0, 1),
     ylim = c(0, 1),
     xlab = "Predicted Probability",
     ylab = "Observed Probability"
     )

(Plot: bootstrap calibration curve for the test set)

## 
## n=170   Mean absolute error=0.031   Mean squared error=0.00183
## 0.9 Quantile of absolute error=0.078

This graph is the test-set calibration curve, resampled with the bootstrap method.

You can see that this figure and the one from Method 1 are essentially the same; the only difference is that our manual implementation adds a bias-corrected curve from 100 bootstrap resamples!

Test Set Calibration Curve Method 3

This method uses the riskRegression package, and it is the one I recommend; this package really delivers! You may have heard of another package, pec: riskRegression is its upgraded version, and most of pec's functions have been migrated to riskRegression.

# the training-set model, refit with glm
fit <- glm(dead ~ birth + lowph + pltct + bwt + vent + black + white,
            data = train_df, family = binomial)

library(riskRegression)
## riskRegression version 2022.03.22

# Performance on the test set
score <- Score(list("fit" = fit),
               formula = dead ~ 1,
               data = test_df, # here you can choose the test set
               metrics = c("auc", "brier"),
               summary = c("risks", "IPA", "riskQuantile", "ibs"),
               plots = "calibration",
               null.model = T,
               conf.int = T,
               B = 100,
               M = 50
               )

# then just draw it
plotCalibration(score, col = "tomato",
                xlab = "Predicted Risk",
                ylab = "Observed Risk",
                bars = F)

(Plot: riskRegression calibration curve for the test set)

This method can also return the underlying data so you can polish the plot with ggplot2; for details, see Drawing of calibration curve for dichotomous data. I won't go into much more here.
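
For example, a minimal sketch (assuming, as in that post, that plotCalibration invisibly returns its plotting data, with the smoothed predicted/observed pairs in plotFrames):

library(ggplot2)

# capture the plotting data instead of drawing (plot = FALSE)
cal_dat <- plotCalibration(score, plot = FALSE)
df <- cal_dat$plotFrames$fit # Pred/Obs pairs for the model named "fit"

ggplot(df, aes(Pred, Obs)) +
  geom_line(color = "tomato", linewidth = 1) +
  geom_abline(slope = 1, intercept = 0, linetype = 2) + # ideal calibration
  labs(x = "Predicted Risk", y = "Observed Risk") +
  theme_bw()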

Calibration curves for logistic regression are really simple; next time I will cover the test-set calibration curve for Cox regression.
