The AUC (Area Under the Curve) is the area under the ROC curve and is a performance measure that tells you how well your model can distinguish between classes. The higher the AUC, the better the model.
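As a rough illustration, here is a minimal sketch of computing the AUC with scikit-learn’s `roc_auc_score`; the labels and scores below are made-up values for illustration only:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical ground-truth labels (1 = positive class) and the
# model's predicted probabilities for the positive class.
y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3]

# roc_auc_score computes the area under the ROC curve directly from
# the scores; 1.0 is a perfect ranking, 0.5 is chance level.
auc = roc_auc_score(y_true, y_scores)
print(f"AUC: {auc:.3f}")
```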
In theory, a good model (one that makes the right predictions) is one that has both high precision and high recall. In practice, however, a model has to make compromises between the two metrics: raising one typically lowers the other. Thus, it can be hard to compare the performance of models on precision and recall alone.
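To make the two metrics concrete, here is a minimal sketch computing precision and recall with scikit-learn; the labels and predictions are made-up values for illustration:

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical true labels and model predictions (1 = positive class).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Precision: of the examples predicted positive, how many really are?
# Recall: of the actual positives, how many did the model find?
print(f"Precision: {precision_score(y_true, y_pred):.2f}")
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")
```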
Accuracy is a performance metric that allows you to evaluate how good your model is. It’s used in classification models and is the ratio of correct predictions to the total number of predictions:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Tip: Accuracy is highly sensitive to class imbalances in your data.
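As an illustration of both the formula and the tip above, the sketch below uses made-up, heavily imbalanced labels to show how a model that always predicts the majority class still scores a high accuracy:

```python
from sklearn.metrics import accuracy_score

# Hypothetical imbalanced labels: 90 Non-default (0) vs. 10 Default (1).
y_true = [0] * 90 + [1] * 10

# A useless "model" that always predicts the majority class.
y_pred = [0] * 100

# Accuracy = correct predictions / total predictions = 90 / 100.
print(f"Accuracy: {accuracy_score(y_true, y_pred):.2f}")  # 0.90
```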
Assume a data set that includes examples of a class Non-default and examples of a class Default. Assume further that you’re evaluating your model’s performance at predicting examples of class Non-default. False negatives is a field in the confusion matrix that counts the examples that actually belong to class Non-default but that your model incorrectly predicted as Default.
Assume a data set that includes examples of a class Non-default and examples of a class Default. Assume further that you’re evaluating your model’s performance at predicting examples of class Non-default. False positives is a field in the confusion matrix that counts the examples that actually belong to class Default but that your model incorrectly predicted as Non-default.
Assume a data set that includes examples of a class Non-default and examples of a class Default. Assume further that you’re evaluating your model’s performance at predicting examples of class Non-default. True negatives is a field in the confusion matrix that counts the examples that belong to class Default and that your model correctly predicted as Default.
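Putting the definitions above together, here is a minimal sketch that builds the confusion matrix for the Non-default/Default example with scikit-learn; the labels are made-up, and Non-default is treated as the positive class:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted classes for the example above.
y_true = ["Non-default", "Non-default", "Default", "Default", "Non-default", "Default"]
y_pred = ["Non-default", "Default", "Default", "Non-default", "Non-default", "Default"]

# With labels ordered as [positive, negative], the matrix reads:
# [[true positives,  false negatives],
#  [false positives, true negatives ]]
cm = confusion_matrix(y_true, y_pred, labels=["Non-default", "Default"])
tp, fn = cm[0]
fp, tn = cm[1]
print(f"TP={tp}, FN={fn}, FP={fp}, TN={tn}")
```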