Applications of large language models for digital educational platforms
https://doi.org/10.25682/NIISI.2025.2.0001
Abstract
This paper explores the potential of large language models (LLMs) to enhance interactions between students and educators on digital educational platforms. It analyzes state-of-the-art models, including YandexGPT, Mistral, Qwen, LLaMA, and their variants, with respect to their architectural features, performance, and adaptability to educational tasks. The study demonstrates that proper tuning of model parameters enables their effective use for automating routine tasks, personalizing learning, and expanding instructors' toolsets.
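As an illustration of the parameter tuning discussed above, the following minimal sketch loads one of the surveyed models (Qwen2.5-Coder-7B-Instruct, reference 9) through the Hugging Face transformers library and generates feedback on a student's function with explicit sampling settings. The prompt and the decoding values (temperature, top_p, max_new_tokens) are illustrative assumptions for this sketch, not the configuration evaluated in the paper.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Model checkpoint taken from reference 9; any of the surveyed
# instruction-tuned models would be used the same way.
model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# A routine instructor task: brief feedback on a student's code (hypothetical prompt).
messages = [
    {"role": "system", "content": "You are a programming tutor. Give brief, constructive feedback."},
    {"role": "user", "content": "Review this function:\ndef mean(xs): return sum(xs) / len(xs)"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,  # low temperature keeps feedback focused and consistent
    top_p=0.9,        # nucleus sampling trims unlikely continuations
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))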
About the Authors
A. G. Leonov
Russian Federation
N. S. Martynov
Russian Federation
K. A. Mashchenko
Russian Federation
M. S. Paremuzov
Russian Federation
K. K. Pchelin
Russian Federation
A. V. Shlyakhov
Russian Federation
References
1. Chu, Z., Wang, S., Xie, J., Zhu, T., Yan, Y., Ye, J., Zhong, A., Hu, X., Liang, J., Yu, P.S. and Wen, Q., 2025. LLM agents for education: Advances and applications. arXiv preprint arXiv:2503.11733.
2. Silva, P. and Costa, E., 2025. Assessing large language models for automated feedback generation in learning programming problem solving. arXiv preprint arXiv:2503.14630.
3. Yousef, M., Mohamed, K., Medhat, W. et al., 2025. BeGrading: large language models for enhanced feedback in programming education. Neural Computing and Applications, 37, pp.1027–1040. https://doi.org/10.1007/s00521-024-10449-y.
4. Chen, K., Zhou, X., Lin, Y., Feng, S., Shen, L. and Wu, P., 2025. A Survey on Privacy Risks and Protection in Large Language Models. arXiv preprint arXiv:2505.01976.
5. Zhu, X., Li, J., Liu, Y., Ma, C. and Wang, W., 2024. A survey on model compression for large language models. Transactions of the Association for Computational Linguistics, 12, pp.1556–1577. https://doi.org/10.1162/tacl_a_00704.
6. LLM YandexGPT 5 [website]. URL: https://huggingface.co/yandex/YandexGPT-5-Lite-8B-pretrain (accessed 11.08.2025).
7. Lo, K.M., Huang, Z., Qiu, Z., Wang, Z. and Fu, J., 2024. A closer look into mixture-of-experts in large language models. arXiv preprint arXiv:2406.18219.
8. LLM Mistral [website]. URL: https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506 (accessed 11.08.2025).
9. LLM Qwen2.5-Coder [website]. URL: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct (accessed 11.08.2025).
10. LLM Llama-3.1 [website]. URL: https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct (accessed 11.08.2025).
11. Kudo, T. and Richardson, J., 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Blanco, E. and Lu, W. (eds.), Proceedings of EMNLP 2018: System Demonstrations. Association for Computational Linguistics, pp.66–71.
12. LLM Seed-Coder [website]. URL: https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct (accessed 11.08.2025).
13. Rafailov, R., Sharma, A., Mitchell, E., Manning, C.D., Ermon, S. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, pp.53728–53741.
14. Chen, Q., Qin, L., Liu, J., Peng, D., Guan, J., Wang, P., Hu, M., Zhou, Y., Gao, T. and Che, W., 2025. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. arXiv preprint arXiv:2503.09567.
15. Thüs, D., Malone, S. and Brünken, R., 2024. Exploring generative AI in higher education: a RAG system to enhance student engagement with scientific literature. Frontiers in Psychology, 15:1474892. https://doi.org/10.3389/fpsyg.2024.1474892.
16. Fan, W., Ding, Y., Ning, L., Wang, S., Li, H., Yin, D., Chua, T.S. and Li, Q., 2024, August. A survey on RAG meeting LLMs: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 6491–6501).
17. Byun, G. and Choi, J.D., 2025. D-GEN: Automatic Distractor Generation and Evaluation for Reliable Assessment of Generative Model. arXiv preprint arXiv:2504.13439.
18. Bitew, S.K., Deleu, J., Develder, C. and Demeester, T., 2023, September. Distractor generation for multiple-choice questions with predictive prompting and large language models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 48-63). Cham: Springer Nature Switzerland.
For citations:
Leonov A.G., Martynov N.S., Mashchenko K.A., Paremuzov M.S., Pchelin K.K., Shlyakhov A.V. Applications of large language models for digital educational platforms. SRISA Proceedings. 2025;15(2):9-15. (In Russ.) https://doi.org/10.25682/NIISI.2025.2.0001