AI model collapse might be prevented by studying human language transmission
Briefly

Ilia Shumailov and colleagues demonstrate that training AI models on data generated by previous models can lead to 'model collapse', in which successive models increasingly detach from real-world information and produce nonsensical outputs.
In iterative training cycles, language models tend to generate sentences that seem statistically probable but drift ever further from human-like coherence, eventually producing meaningless sequences that undermine the value of the generated content.
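A minimal toy sketch of this recursive loop, under an assumption not taken from the article: a one-dimensional Gaussian stands in for the model, and each generation is fitted only to samples produced by the previous one. The sample size, generation count, and printing interval below are arbitrary choices for illustration.

```python
import numpy as np

# Toy sketch (illustrative assumption, not the paper's experiment):
# each "generation" fits a Gaussian to data produced by the previous
# generation and then emits fresh synthetic samples for the next one,
# mimicking the recursive training loop described above.
rng = np.random.default_rng(0)
n_samples = 50          # small samples make the estimation error visible
n_generations = 300

# Generation 0 trains on "real" data: a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(1, n_generations + 1):
    mu, sigma = data.mean(), data.std()           # fit the toy model
    data = rng.normal(mu, sigma, size=n_samples)  # next generation sees only synthetic data
    if gen % 50 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.4f}")

# The fitted standard deviation tends to shrink toward zero over generations:
# the tails of the original distribution are forgotten first, a simplified
# analogue of the collapse seen when models train on their own outputs.
```

Running the sketch shows the fitted spread decaying across generations, a simplified analogue of how repeatedly training on generated data discards information about the original distribution.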
Read at Nature