Target Leakage
Target leakage, a form of data leakage, is one of the hardest problems to detect when developing an AI model. It occurs when the training data includes information that will not be available at prediction time, once the model is applied to data collected in the future. Because the model effectively already knows the actual outcomes, its results on the training data are unrealistically accurate, like bringing an answer sheet into an exam.
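
The effect is easy to reproduce. Below is a minimal, hypothetical sketch using numpy and scikit-learn: the `leaky` feature stands in for a column that is only recorded after the outcome is known (for example, a follow-up code filled in once a customer has already churned). The scenario, feature names, and numbers are illustrative, not from any real dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1_000

# Legitimate features: values that are known at prediction time.
X_legit = rng.normal(size=(n, 5))

# The target depends only weakly on the legitimate features, plus noise,
# so an honest model cannot score much above chance-plus-signal.
y = (X_legit[:, 0] + rng.normal(scale=2.0, size=n) > 0).astype(int)

# Leaky feature: derived from the outcome itself, i.e. information that
# would only exist in historical records, never at prediction time.
leaky = y + rng.normal(scale=0.1, size=n)
X_leaky = np.column_stack([X_legit, leaky])

model = RandomForestClassifier(random_state=0)

# Cross-validation scores look honest without the leak and
# near-perfect with it, because the leaky column is present in
# the validation folds too.
print("Without leak:", cross_val_score(model, X_legit, y, cv=5).mean())
print("With leak:   ", cross_val_score(model, X_leaky, y, cv=5).mean())
```

The second score is the trap: it looks excellent in offline evaluation, yet in production the leaky column does not exist yet (or is empty), so the deployed model's real accuracy collapses to roughly the first score or worse.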