Reflections and perceptions on the automated assessment of written discourse

  • Ricardo Alberto Benítez, Universidad Católica de Valparaíso, Chile
Keywords: assessment method, written language, feedback

Abstract

This article describes the thoughts and perceptions regarding a potential problem when teaching and learning writing skills in school and university environments: the assessment of writing using computerised instruments. The conflict between this kind of assessment and the one carried out by humans may be subject to discussion and analysis, as it has been in the United States, where several experiments have attempted to automate the scoring of written compositions. However, the results have been unsatisfactory according to both the researchers and the students whose written output has been subjected to such assessment. Therefore, the present article aims to enquire into the perceptions of university students with respect to their willingness to receive feedback and evaluations on their written compositions through an automated instrument. To gather this information, students were asked to respond online as to whether or not they would be willing to subject their work to evaluation by a computer, with the option to add comments. Responses and comments were classified as positive, negative, or undecided. Negative responses were the most common, followed by positive and undecided ones. It can be concluded that larger-scale studies exploring the perceptions of both students and writing instructors are needed to obtain more consistent results. In addition, institutions that opt to use automated instruments to assess written assignments should explain their advantages and disadvantages.

Author biography

Ricardo Alberto Benítez, Universidad Católica de Valparaíso, Chile

Master of Arts in the Teaching of English as a Second Language (MA in TESL), Arizona State University, United States. Adjunct professor at the Universidad Católica de Valparaíso, Chile.

How to cite
Benítez, R. A. (2019). Reflexiones y percepciones sobre la evaluación automatizada del discurso escrito. Enunciación, 24(2), 227-240. https://doi.org/10.14483/22486798.14311
Published: 2019-12-20
Section
Language pedagogies