The Recursive Loop of AI-Generated Content

DOI: https://doi.org/10.14483/23448350.24623
Published: 03/18/2026
Issue: Vol. 52 No. 1 (2025): January-April 2025
Section: Editorial
Keywords: Artificial Intelligence (en)
Abstract (en)
In recent years, generative artificial intelligence (AI) based on large language models (LLMs) has begun to produce a substantial share of online content, which raises a question: can AI models be trained on data produced by other AI models? In this recursive loop, algorithms consume their own outputs, blurring the line between human-generated knowledge and machine-generated text. Training a new model on AI-generated data can cause irreversible defects, a phenomenon known as model collapse. As a consequence, a model trained on its own outputs may begin to forget the nuances of genuine human language; in other words, it loses diversity and converges to a narrow, repetitive state. Unless new human-generated data are introduced to break the cycle, each generation of AI may perform worse than the last. This recursive loop not only threatens the performance of future AI systems but also raises concerns about the integrity of the results they produce.
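
The mechanism the abstract describes can be made concrete with a small simulation. The sketch below is not part of the editorial; the choice of a Gaussian model, the sample size, and the number of generations are illustrative assumptions. It repeatedly fits a distribution to samples drawn from the previous fit, mimicking a model trained only on its predecessor's output, and the fitted spread tends to shrink across generations, a toy analogue of the loss of diversity described above.

```python
import numpy as np

# Toy simulation of the recursive training loop: each "generation" is a
# Gaussian model fitted to samples drawn from the previous generation's model.
# All parameter values here are illustrative assumptions, not values from the
# editorial.

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0      # generation 0: the "human-generated" distribution
n_samples = 20            # synthetic training set available per generation

for generation in range(1, 51):
    # Train on the previous model's output only (no fresh human data).
    samples = rng.normal(mu, sigma, n_samples)
    # Refit: the sample mean and std become the next generation's model.
    mu, sigma = samples.mean(), samples.std()
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# In log-space, sigma follows a multiplicative random walk with negative
# drift, so the fitted distribution tends to narrow toward a degenerate
# point mass: the "loss of diversity" that characterizes model collapse.
```

Injecting fresh draws from the original generation-0 distribution at each step would counteract this drift, mirroring the editorial's point that new human-generated data are needed to break the cycle.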
License

Copyright 2026 Hector Florez, Santiago Ferreiros, Florencia Pollo Cattaneo

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
By submitting their article to Revista Científica, the author(s) certify that the manuscript has not been and will not be submitted to or published in any other scientific journal.
Under the editorial policies established for Revista Científica, no fees are charged at any stage of the editorial process: article submission, editing, publication, and subsequent download of the contents are free of charge, as the journal is a non-profit academic publication.