DOI: https://doi.org/10.14483/23448350.24623
Published: 03/18/2026
Issue: Vol. 52 No. 1 (2025): January-April 2025
Section: Editorial

The Recursive Loop of AI-Generated Content

Keywords: Artificial Intelligence

Abstract
In recent years, generative artificial intelligence (AI) based on large language models (LLMs) has begun to produce a substantial share of online content, which raises a question: can AI models be trained on data produced by other AI models? In this recursive loop, algorithms consume their own outputs, blurring the line between human-generated knowledge and machine-generated text. Training a new model on AI-generated data can cause irreversible defects, a phenomenon called model collapse. Trained on its own outputs, the model gradually forgets the nuances of genuine human language; in other words, it loses diversity and converges to a narrow, repetitive state. Unless new human-generated data are introduced to break the cycle, each generation of AI may perform worse than the last. This recursive loop not only threatens the performance of future AI systems but also raises alarms about the integrity of the results AI produces.
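The loss of diversity described above can be illustrated with a deliberately simple toy model (a sketch for intuition only; the editorial concerns LLMs, not Gaussians). Each "generation" fits a one-dimensional Gaussian to the data produced by the previous generation and then samples fresh training data from that fit. Because each fit is made from a small sample of synthetic data, estimation noise compounds across generations and the spread of the data tends to shrink toward a degenerate, narrow state:

```python
import random
import statistics

def fit_and_sample(data, n):
    # "Train" a toy model on the data: estimate mean and spread,
    # then generate n new synthetic data points from the fitted model.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 10
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # the original "human" data
spreads = [statistics.stdev(data)]

# Each generation trains only on the previous generation's synthetic output.
for generation in range(200):
    data = fit_and_sample(data, n)
    spreads.append(statistics.stdev(data))

print(f"initial spread: {spreads[0]:.4f}")
print(f"spread after 200 synthetic generations: {spreads[-1]:.6f}")
```

Running the sketch shows the spread collapsing toward zero over the generations, the toy analogue of a model converging to narrow, repetitive outputs. Injecting fresh samples from the original distribution at each step would counteract the collapse, mirroring the role of new human-generated data.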
License
Copyright (c) 2026 Hector Florez, Santiago Ferreiros, Florencia Pollo Cattaneo

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
When submitting their article to the Scientific Journal, the author(s) certify that their manuscript has not been, and will not be, submitted to or published in any other scientific journal.
Under the editorial policies established for the Scientific Journal, no fees are charged at any stage of the editorial process: the submission of articles, editing, publication, and subsequent downloading of the contents are free of charge, since the journal is a non-profit academic publication.