Overfitting is the phenomenon of a model not performing well, i.e., not making good predictions, because it captured the noise as well as the signal in the training set. In other words, the model generalizes too little: instead of just characterizing the underlying signal, it also memorizes the noise particular to the training data.
Underfitting is the phenomenon of a model not performing well, i.e., not making good predictions, because it wasn’t able to correctly or completely capture the signal in the training set. In other words, the model generalizes too much, to the point that it misses the pattern in the training data altogether.
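The two failure modes can be sketched with a toy example (hypothetical data, chosen only for illustration): a model that memorizes every training point overfits, doing perfectly on the training set but failing on new data, while a model that ignores the input and always predicts the training mean underfits.

```python
# Toy data, roughly y = 2x plus a little noise (illustrative values only).
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
test = [(5, 10.1), (6, 11.8)]

def mse(model, data):
    # Mean squared error of a model over a list of (x, y) pairs.
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# "Overfit" model: a lookup table of the training set. It captures the
# noise along with the signal and has nothing to say off the training set.
lookup = dict(train)
def memorizer(x):
    return lookup.get(x, 0.0)

# "Underfit" model: ignores x entirely and predicts the mean target.
mean_y = sum(y for _, y in train) / len(train)
def mean_model(x):
    return mean_y

print(mse(memorizer, train))   # 0.0 on the training set
print(mse(memorizer, test))    # very large: it generalized too little
print(mse(mean_model, train))  # already poor on the training set itself
print(mse(mean_model, test))
```

The memorizer's zero training error paired with a large test error is the classic overfitting signature; the mean model's high error on both sets is the underfitting signature.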
The AUC is the area under the ROC curve and is a performance measure that tells you how well your model can separate the different classes. The higher the AUC, the better the model.
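One way to make the AUC concrete: it equals the probability that a randomly chosen positive example receives a higher score from the model than a randomly chosen negative one (ties counting half). A minimal sketch, with made-up scores and labels:

```python
def auc(scores, labels):
    # AUC as the fraction of (positive, negative) pairs the model ranks
    # correctly, counting ties as half a correct ranking.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # model scores (hypothetical)
labels = [1,   1,   0,   1,   0,   0]     # true classes

print(auc(scores, labels))  # 8 of 9 positive/negative pairs ranked correctly
```

A model that ranked every positive above every negative would score 1.0; random scoring hovers around 0.5.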
In theory, a good model (one that makes the right predictions) is one that has both high precision and high recall. In practice, however, a model has to trade one metric off against the other, which can make it hard to compare the performance of two models on precision and recall alone.
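The trade-off is easy to see with two hypothetical sets of predictions over the same labels: a conservative model that predicts the positive class rarely, and a liberal one that predicts it freely.

```python
def precision_recall(y_true, y_pred, positive=1):
    # Precision: of everything predicted positive, how much really is.
    # Recall: of everything really positive, how much was found.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]

conservative = [1, 0, 0, 0, 0, 0, 0, 0]  # rarely predicts positive
liberal      = [1, 1, 1, 1, 1, 1, 0, 0]  # freely predicts positive

print(precision_recall(y_true, conservative))  # (1.0, 0.25): precise, misses most
print(precision_recall(y_true, liberal))       # (0.67, 1.0): finds all, less precise
```

Neither model dominates the other, which is exactly why comparing on the two raw metrics is awkward.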
Accuracy is a performance metric that allows you to evaluate how good your model is. It’s used in classification models and is the ratio of correct predictions to the total number of predictions made.
Tip: Accuracy is highly sensitive to class imbalances in your data.
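The sensitivity to class imbalance is worth seeing with numbers. A sketch with a hypothetical 95/5 split: a "model" that always predicts the majority class scores high accuracy while learning nothing.

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# 95 examples of the majority class (0), only 5 of the minority class (1).
y_true = [0] * 95 + [1] * 5

always_majority = [0] * 100  # ignores the input entirely

print(accuracy(y_true, always_majority))  # 0.95, yet it never finds class 1
```

This is why accuracy alone can be misleading on imbalanced data, and why metrics like precision and recall per class are reported alongside it.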
Assume a data set that includes examples of a class Non-default and examples of a class Default. Assume further that you’re evaluating your model’s performance at predicting examples of class Non-default. False negatives is a field in the confusion matrix that counts the examples that actually belong to class Non-default but that the model predicted as Default.
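A minimal sketch of those counts, using a few made-up labels and treating Non-default as the positive class:

```python
def confusion_counts(y_true, y_pred, positive):
    # Counts for a binary confusion matrix, with `positive` as the class
    # whose predictions are being evaluated.
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        if t == positive and p == positive:
            tp += 1
        elif t != positive and p == positive:
            fp += 1
        elif t == positive and p != positive:
            fn += 1  # e.g. an actual Non-default predicted as Default
        else:
            tn += 1
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

y_true = ["Non-default", "Non-default", "Default", "Non-default", "Default"]
y_pred = ["Non-default", "Default",     "Default", "Non-default", "Non-default"]

print(confusion_counts(y_true, y_pred, "Non-default"))
# The single FN here is the second example: truly Non-default,
# but predicted as Default.
```

Precision, recall, and accuracy can all be read straight off these four counts.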