Goal: Practice measuring out-of-sample performance and avoiding overfitting.

Indian Patient Data

The Indian Liver Patient (ILP) data contain 583 records with 10 features derived from various medical tests, plus the variable patient indicating whether the individual is a liver patient.

library(tidyverse)

ilp = read_csv("data/ilp.csv") %>% 
  mutate( patient = factor(patient) )   # response: liver patient or not
glimpse(ilp)
Observations: 583
Variables: 11
$ Age     <dbl> 65, 62, 62, 58, 72, 46, 26, 29, 17, 55, 57, 72...
$ Gender  <chr> "Female", "Male", "Male", "Male", "Male", "Mal...
$ TB      <dbl> 0.7, 10.9, 7.3, 1.0, 3.9, 1.8, 0.9, 0.9, 0.9, ...
$ DB      <dbl> 0.1, 5.5, 4.1, 0.4, 2.0, 0.7, 0.2, 0.3, 0.3, 0...
$ Alkphos <dbl> 187, 699, 490, 182, 195, 208, 154, 202, 202, 2...
$ Sgpt    <dbl> 16, 64, 60, 14, 27, 19, 16, 14, 22, 53, 51, 31...
$ Sgot    <dbl> 18, 100, 68, 20, 59, 14, 12, 11, 19, 58, 59, 5...
$ TP      <dbl> 6.8, 7.5, 7.0, 6.8, 7.3, 7.6, 7.0, 6.7, 7.4, 6...
$ ALB     <dbl> 3.3, 3.2, 3.3, 3.4, 2.4, 4.4, 3.5, 3.6, 4.1, 3...
$ `A/G`   <dbl> 0.90, 0.74, 0.89, 1.00, 0.40, 1.30, 1.00, 1.10...
$ patient <fct> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE...
  1. Split the data into training and testing sets (80%-20%). Fit a classification tree (default parameters), and report its training and test set accuracy. (A starter sketch appears after this list.)

  2. Select an optimal tree model using cross-validation. Use the control options rpart.control(minsplit = 1, cp = 0) to grow the full tree, then prune it back to the subtree with the minimum cross-validation error. Report your new model's training and test set accuracy. (See the sketch after this list.)

  3. Create a 95% confidence interval for the test accuracy of your optimal tree, using 1000 bootstrap samples (you can use the infer package). Does the optimal model improve significantly over the first one? (See the sketch after this list.)

  4. Plot the optimal tree model; does it help predict liver disease? Justify your answer.
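
A minimal sketch for part 1, assuming a simple random index split (the seed, the base-R split, and the acc helper are illustrative choices, not the required method):

library(rpart)

set.seed(1)
n = nrow(ilp)
train_idx = sample(n, size = round(0.8 * n))   # 80% of rows for training
train = ilp[train_idx, ]
test  = ilp[-train_idx, ]

# classification tree with default parameters
fit = rpart(patient ~ ., data = train, method = "class")

# accuracy = proportion of correctly classified observations
acc = function(model, data) mean(predict(model, data, type = "class") == data$patient)
acc(fit, train)
acc(fit, test)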
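
For part 2, rpart cross-validates internally as the tree grows, so pruning can be driven by its cptable; a sketch reusing train, test, and the acc helper from the previous block:

# grow the full tree: no complexity penalty, splits down to single observations
full = rpart(patient ~ ., data = train, method = "class",
             control = rpart.control(minsplit = 1, cp = 0))

# pick the complexity parameter with minimum cross-validation error (xerror)
best_cp = full$cptable[ which.min(full$cptable[, "xerror"]), "CP" ]
pruned = prune(full, cp = best_cp)

acc(pruned, train)
acc(pruned, test)

For part 4, rpart.plot::rpart.plot(pruned) is one way to draw the pruned tree.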
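
For part 3, a sketch using the infer package, bootstrapping the indicator of a correct test-set prediction (the correct variable is introduced here purely for illustration):

library(infer)

test_results = test %>% 
  mutate(correct = factor(predict(pruned, test, type = "class") == patient))

test_results %>% 
  specify(response = correct, success = "TRUE") %>% 
  generate(reps = 1000, type = "bootstrap") %>%   # 1000 bootstrap resamples
  calculate(stat = "prop") %>%                    # accuracy = proportion correct
  get_confidence_interval(level = 0.95)

If the first tree's test accuracy falls inside this interval, the improvement is not significant at the 95% level.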

Simulation experiment

We will use a simulation experiment to explore overfitting. Consider \(N=1000\) observations with a random (equiprobable) binary response variable (Y), and \(p=250\) unrelated random features (V1:V250).

N = 1000; p = 250
set.seed(123)
# p columns of independent standard-normal noise, plus an independent coin-flip response Y
toy = as_tibble( matrix( rnorm(N*p), ncol = p ) ) %>% 
   mutate( Y = sample( c(0,1), size = N, replace = TRUE ) ) 
  1. What should be the out-of-sample performance of any classifier applied to this data? Explain.

  2. Split the data into training and test sets (80%-20%). Fit a logistic regression model using all 250 feature variables, and report its training and test accuracy. Do you believe there is overfitting? (A sketch covering this part and the next appears after this list.)

  3. Repeat the previous part using only the first 50 feature variables. Do you see any difference? Explain why you think that is. (Hint: select the first 50 columns to feed into the model)

  4. Use cross-validation to estimate the out-of-sample error of the model with all 250 features. Split the training data into 5 non-overlapping folds; for each fold, fit the model on the remaining folds and calculate its error on the held-out fold; finally, report the average cross-validation error over all folds. (See the sketch after this list.)

  5. Simulate \(N=5000\) observations (rather than 1000) and fit the model with all 250 variables again. Report the training and test error (80%-20% split); what do you observe?
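
A sketch for parts 2 and 3, assuming the same kind of illustrative index split; since Y is numeric 0/1, predicted probabilities are thresholded at 0.5 (the accuracy helper is hypothetical):

set.seed(123)
idx = sample(N, size = 0.8 * N)
toy_train = toy[idx, ]
toy_test  = toy[-idx, ]

# logistic regression on all 250 (pure-noise) features
fit_all = glm(Y ~ ., data = toy_train, family = binomial)

accuracy = function(model, data) mean((predict(model, data, type = "response") > 0.5) == data$Y)
accuracy(fit_all, toy_train)
accuracy(fit_all, toy_test)

# part 3: same model restricted to the first 50 features
fit_50 = glm(Y ~ ., data = select(toy_train, 1:50, Y), family = binomial)
accuracy(fit_50, toy_train)
accuracy(fit_50, toy_test)

With 250 predictors and only 800 training rows, glm may warn about fitted probabilities of 0 or 1; that is itself a symptom of the overfitting being studied.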
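
For part 4, a minimal hand-rolled 5-fold cross-validation on the training set (fold assignment is random; no extra package is assumed):

K = 5
folds = sample(rep(1:K, length.out = nrow(toy_train)))   # random fold labels 1..5

cv_err = sapply(1:K, function(k) {
  fit  = glm(Y ~ ., data = toy_train[folds != k, ], family = binomial)  # fit on other folds
  pred = predict(fit, toy_train[folds == k, ], type = "response") > 0.5
  mean(pred != toy_train$Y[folds == k])                  # error on the held-out fold
})
mean(cv_err)   # average cross-validation error

Unlike the training error, this estimate uses predictions on data the model never saw, so it should land near the chance level from part 1.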
