METADATA


Title: Fostering a ‘Human with AI’ approach for evaluating students’ writing in English.

 

Vol. 13(1), 2025, pp. 140-159

DOI: https://doi.org/10.46687/VFBZ9792.  

 

Author: Abdu Al-Kadi

About the author: Abdu Al-Kadi, an associate professor of applied linguistics at Bergen University, has taught English for over two decades in EFL, ESL, and ESP contexts. He undertook a two-year fellowship at Philadelphia University in Jordan and has participated in international conferences and seminars. His research interests lie mainly in technology-based language education, TESOL, and SLA. His publication record includes journal articles, book chapters, book reviews, and newsletters.

E-mail: abdu.al-kadi@uib.no    ORCID: https://orcid.org/0000-0003-3805-7507

 

Link: http://silc.fhn-shu.com/issues/2025-1/SILC_2025_Vol_13_Issue_1_140-159_20.pdf

 

Citation (APA): Al-Kadi, A. (2025). Fostering a ‘Human with AI’ approach for evaluating students’ writing in English. Studies in Linguistics, Culture, and FLT, 13(1), 140-159. https://doi.org/10.46687/VFBZ9792.

 

Abstract: With the increasing interest in Artificial Intelligence (AI) within academia, Automated Writing Evaluation (AWE) has gained significant traction. However, its full impact on writing skills remains a topic of contention. This paper examines how AI-based feedback helps undergraduate students overcome challenges they encounter when writing essays as part of their university studies. A text-oriented approach to writing evaluation was adopted to demonstrate how Grammarly was used to screen and score a sample of 22 short essays (11,050 words). In addition to assigning numerical scores to the essays, Grammarly provided student writers with detailed feedback, including error reports that were subsequently analyzed. Two focus group discussions were then conducted to shed more light on Grammarly-based evaluation. Findings showed that AWE encourages learners to notice their errors and refine their essays accordingly before submitting them to teachers for scoring. Nevertheless, such a tool on its own falls short of providing a thorough evaluation; it could instead be used in tandem with peer review and teachers' evaluation. The paper closes with implications and suggestions for foregrounding AWE in academic writing courses as a complement to, but not a surrogate for, human raters' feedback.

Keywords: Academic writing (AW), Artificial Intelligence (AI), Assessment, Automated Writing Evaluation (AWE), Grammarly

 

 

References:

Abu Qub’a, A., Abu Guba, M. N., & Fare, S. (2024). Exploring the use of Grammarly in assessing English academic writing. Heliyon, 10(15), e34893. https://doi.org/10.1016/j.heliyon.2024.e34893.

Alexander, K., Savvidou, C., & Alexander, C. (2023). Who wrote this essay? Detecting AI-generated writing in second language education in higher education. Teaching English with Technology, 23(2), 25–43. https://doi.org/10.56297/BUKA4060/XHLD5365.

Al-Kadi, A., & Ali, J. K. M. (2024). A holistic approach to ChatGPT, Gemini, and Copilot in English learning and teaching. Language Teaching Research Quarterly, 43, 155-166. https://doi.org/10.32038/ltrq.2024.43.09.

Ayan, A. D., & Erdemir, N. (2023). EFL teachers’ perceptions of automated written corrective feedback and Grammarly. Ahmet Keleşoğlu Eğitim Fakültesi Dergisi (AKEF) Dergisi, 5(3), 1183-1198. https://doi.org/10.38151/akef.2023.106.

Bailey, D., & Lee, A. R. (2020). An exploratory study of Grammarly in the language learning context: An analysis of test-based, textbook-based and Facebook corpora. TESOL International Journal, 15(2), 4-27.

Barrett, A., & Pack, A. (2023). Not quite eye to AI: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(59), 1-24. https://doi.org/10.1186/s41239-023-00427-0.

Brooks, G. (2013). Assessment and academic writing: A look at the use of rubrics in the second language writing classroom. Kwansei Gakuin University Humanities Review, 17, 227-240.

Chen, C., & Cheng, W. (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94-112. http://dx.doi.org/10125/44145.

Cotos, E. (2014). Automated writing evaluation. In E. Cotos (Ed.), Genre-based automated writing evaluation for L2 research writing: From design to evaluation and enhancement (pp. 40-64). New York, NY: Palgrave Macmillan. https://doi.org/10.1057/9781137333377.

Elliot, N., & Klobucar, A. (2013). Automated essay evaluation and the teaching of writing. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 16-35). New York & London: Routledge.

Fan, N. (2023). Exploring the effects of automated written corrective feedback on EFL students’ writing quality: A mixed-methods study. SAGE Open, 13(2), 1-17. https://doi.org/10.1177/21582440231181296.

Ghalib, T. (2018). EFL writing assessment and evaluation rubrics in Yemen. In A. Ahmed & H. Abouabdelkader (Eds.), Assessing EFL writing in the 21st century Arab World: Revealing the unknown (pp. 261-284). Palgrave Macmillan. https://doi.org/10.1007/978-3-319-64104-1_10.

Giray, L., Sevnarayan, K., & Madiseh, F. R. (2025). Beyond policing: AI writing detection tools, trust, academic integrity, and their implications for college writing. Internet Reference Services Quarterly, 29(1), 83–116. https://doi.org/10.1080/10875301.2024.2437174.

Giray, L. (2025). Death of the old teacher: Navigating AI in education through Kubler-Ross model. ECNU Review of Education, 1–8. https://doi.org/10.1177/20965311251319049.

Harmer, J. (2004). How to teach writing. London: Longman.

Harvey, M. (2003). The nuts and bolts of college writing. Indianapolis/Cambridge: Hackett Publishing Company.

Hockly, N. (2018). Automated writing evaluation. ELT Journal. https://doi.org/10.1093/elt/ccy044.

Khoshnevisan, B. (2019). The affordances and constraints of automatic writing evaluation (AWE) tools: A case for Grammarly. ARTESOL EFL Journal, 2(2), 12-25.

Kowal, S., & O’Connell, D. C. (2014). Transcription as a crucial step of data analysis. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 64-78). SAGE Publications. https://doi.org/10.4135/9781446282243.n5.

Link, S., Mehrzad, M., & Rahimi, M. (2020). Impact of automated writing evaluation on teacher feedback, student revision, and writing improvement. Computer Assisted Language Learning, 35(4), 605–634. https://doi.org/10.1080/09588221.2020.1743323.

Mahapatra, S. (2024). Impact of ChatGPT on ESL students’ academic writing skills: A mixed methods intervention study. Smart Learning Environments, 11, 9. https://doi.org/10.1186/s40561-024-00295-9.

Moqbel, M. S., & Al-Kadi, A. (2023). Foreign language learning assessment in the age of ChatGPT: A theoretical account. Journal of English Studies in Arabia Felix, 2(1), 71-84. https://doi.org/10.56540/jesaf.v2i1.62.

O’Neil, R., & Tussell, A. M. (2019). Stop! Grammar time: University students’ perceptions of the automated feedback program Grammarly. Australasian Journal of Educational Technology, 35(1), 42-56. https://doi.org/10.14742/ajet.3795.

Ranalli, J., & Yamashita, T. (2022). Automated written corrective feedback: Error-correction performance and timing of delivery. Language Learning & Technology, 26(1). https://doi.org/10.1080/09588221.2018.1428994.

Stevenson, M. (2016). A critical interpretive synthesis: The integration of automated writing evaluation into classroom writing instruction. Computers and Composition, 42(1), 1-16. https://doi.org/10.1016/j.compcom.2016.05.001.

Tang, J., & Rich, C. (2017). Automated writing evaluation in an EFL setting: Lessons from China. The JALT CALL Journal, 13(2), 117-146. https://doi.org/10.29140/jaltcall.v13n2.215.  

Tardy, C. M. (2025). Teaching second language academic writing. Cambridge University Press. https://doi.org/10.1017/9781009638326.

Taşkıran, A. (2022). AI-based automated writing evaluation for online language learning: Perceptions of distance learners. Kocaeli University Journal of Education, 5(1), 111-129. https://doi.org/10.33400/kuje.1053862.

Thi, N. K., & Nikolov, M. (2022). How teacher and Grammarly feedback complement one another in Myanmar EFL students’ writing. Asia-Pacific Education Researcher, 31, 767–779. https://doi.org/10.1007/s40299-021-00625-2.

Tracy, S. J. (2019). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact (2nd ed.). Wiley-Blackwell.

Weigle, S. (2013). English as a second language writing and automated essay evaluation. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 36-54). New York & London: Routledge.

Wilkinson, L., Bouma, G. D., & Carland, S. (2019). The research process (4th ed.). Oxford University Press Canada.

Wilson, J., & Roscoe, R. (2020). Automated writing evaluation and feedback: Multiple metrics of efficacy. Journal of Educational Computing Research, 58(1), 87-125. https://doi.org/10.1177/0735633119830764.

Yousofi, R. (2022). Grammarly deployment (in)efficacy within EFL academic writing classrooms: An attitudinal report from Afghanistan. Cogent Education, 9(1). https://doi.org/10.1080/2331186X.2022.2142446.

Zare, J., Al-Issa, A., & Ranjbaran Madiseh, F. (2025). Interacting with ChatGPT in essay writing: A study of L2 learners' task motivation. ReCALL, 1–18. https://doi.org/10.1017/S0958344025000035.

Zenebe, B. B. (2017). Learning writing with new perspective-variations in students' perceptions and past instructional practices: An activity theory-oriented case study. The Internet Journal Language, Culture and Society, 43, 24-34. http://aaref.com.au/en/publications/journal/.

Zhang, Z., & Hyland, K. (2018). Student engagement with teacher and automated feedback on L2 writing. Assessing Writing, 36(1), 90-102. https://doi.org/10.1016/j.asw.2018.02.004.