Task Contamination

Task Contamination is a form of data leakage that affects the perceived capabilities of Large Language Models: an LLM may perform well on an N-shot learning task simply because its training data included examples of that task, in which case it is not truly acting as an N-shot learner.
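
One common way to probe for this kind of contamination is to search the training corpus for evaluation examples directly. The sketch below is a minimal, illustrative version of that idea, using word-level n-gram overlap (a 13-gram threshold is a common heuristic, e.g. in GPT-3's contamination analysis). The function and corpus names here are hypothetical, and it assumes the corpus fits in memory; production checks operate over indexed or streamed corpora.

```python
# Minimal sketch: flag an evaluation example as potentially contaminated
# if any of its word-level n-grams appears verbatim in a training document.
# Names (is_contaminated, corpus_docs) are illustrative, not from the paper.

from typing import Iterable, Set


def ngrams(text: str, n: int = 13) -> Set[str]:
    """Return the set of word-level n-grams in `text` (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def is_contaminated(example: str, corpus_docs: Iterable[str], n: int = 13) -> bool:
    """Flag an evaluation example if it shares any n-gram with the corpus.

    Shorter n-grams over-flag common phrases, so a fairly long n is used.
    """
    example_grams = ngrams(example, n)
    if not example_grams:
        return False  # example is shorter than n words; cannot judge
    return any(example_grams & ngrams(doc, n) for doc in corpus_docs)


if __name__ == "__main__":
    corpus = [
        "Review: the film was a delight from start to finish, sharp and "
        "funny. Sentiment: positive",
        "An unrelated training document about gardening and soil pH levels.",
    ]
    eval_example = ("Review: the film was a delight from start to finish, "
                    "sharp and funny. Sentiment:")
    # True -> the "few-shot" prompt was seen during training,
    # so strong performance on it is not evidence of N-shot ability.
    print(is_contaminated(eval_example, corpus))
```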
Related concepts:
Zero-Shot, One-Shot, and Few-Shot Learning
Data Leakage
Related article:
Training, Validation, Test, and Data Leakage
External reference:
https://browse.arxiv.org/html/2312.16337v1