EXPLORING AI-DRIVEN PROGRAMMING EXERCISE GENERATION

J. Rytilahti, O. Weerakoon, E. Kaila (2024).  EXPLORING AI-DRIVEN PROGRAMMING EXERCISE GENERATION.

Large language models (LLMs) are transforming how teachers work. In this paper, we describe several experimental approaches to generating programming exercises using ChatGPT, a popular and publicly accessible LLM. The exercise generation was tightly connected to a large Python programming course targeted at students of Information Technology, Software Engineering, and Computing.

We experimented with three separate approaches, sketched below. In the first, we generated new programming exercises on a specific topic using theme injection. In the second, we generated variations of existing programming exercises by changing their theme or content. In the third, we generated hybrid exercises by injecting original programming exercises with additional topics or combining them with other related exercises.
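
To make the theme-injection approach concrete, the following minimal Python sketch shows how a themed exercise prompt could be sent to an LLM. It assumes the OpenAI Python client; the model name, prompt wording, and helper function are illustrative assumptions, not the actual prompts or tooling used in the study.

# Minimal sketch of theme injection: ask an LLM for a Python exercise on a
# fixed topic, framed in an injected theme. Prompt wording, model name, and
# function names are illustrative only, not those used in the study.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_exercise(topic: str, theme: str) -> str:
    prompt = (
        f"Write a beginner-level Python programming exercise about {topic}. "
        f"Frame the exercise in the context of {theme}. "
        "Include a task description, example input and output, and a model solution."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_exercise("list comprehensions", "planning a music festival schedule"))

Variations and hybrid exercises could be prompted analogously, for example by including the text of an existing exercise in the prompt and asking for a re-themed or combined version.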

Based on our results, all three approaches showed potential but also revealed limitations. Exercise generation with theme injection can produce fully functional exercises; however, these exercises may appear too generic to students or contain errors. The exercise variations retain the semantic meaning of the original exercise quite well while placing it in a different context. We also tested the variations in a large introductory programming course and found that students could not distinguish them from human-written exercises in style or quality. The hybrid exercises were built on the idea of exploring how close we are to fully adaptive learning environments in programming education. The current results of this approach show that further experimentation is needed to reach that goal.

All in all, it is evident that LLMs can be a useful tool for assisting teachers in generating exercises. Even with certain inherent limitations, they are useful in particular cases. We conclude our article by discussing the future possibilities of LLMs, including but not limited to dynamic, automatically generated exercises and fully adaptive learning environments.

Authors: 
Juuso Rytilahti
Oshani Weerakoon
Erkki Kaila
Affiliations: 
University of Turku, Turku, Finland
Keywords: 
AI in education
Large Language Models (LLM)
Programming Education
Pedagogical Tools
ChatGPT
CDIO Standard 2
CDIO Standard 3
CDIO Standard 5
CDIO Standard 8
CDIO Standard 9
CDIO Standard 11
Year: 
2024