Augmented Reality (AR) offers powerful new ways to teach complex subjects by embedding data visualizations directly into the physical learning environment. This approach fosters collaborative and embodied learning experiences. Despite its potential, the adoption of AR in education faces a critical challenge: the content creation bottleneck. The development of high-quality, interactive AR content requires specialized programming and design skills that most educators do not possess, creating a significant barrier to its use in the classroom.
To address this challenge, this paper presents the design and theoretical justification of an autonomous AI agent capable of automatically generating interactive educational data visualizations in AR. The proposed cognitive architecture uses a Large Language Model (LLM) as its core reasoning engine, enabling the agent to interpret high-level pedagogical goals expressed in user prompts. The agent's decision-making is governed by an orchestration layer based on the ReAct (Reasoning and Acting) framework, which supports robust, multi-step task decomposition and planning.
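To make the orchestration layer concrete, the following is a minimal sketch of a ReAct-style loop, assuming the LLM is exposed as a plain `llm(prompt) -> str` callable. The tool names and the Thought/Action/Observation prompt format are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal ReAct-style orchestration loop (illustrative sketch).
# Assumptions: the LLM is a plain callable, and tools are simple
# string-to-string functions; neither reflects the paper's real system.

from typing import Callable

# Hypothetical tools the agent could call while assembling an AR visualization.
TOOLS: dict[str, Callable[[str], str]] = {
    "parse_dataset": lambda arg: f"columns and types of {arg}",
    "choose_chart_type": lambda arg: f"recommended encoding for {arg}",
    "emit_ar_scene": lambda arg: f"AR scene description for {arg}",
}

def react_loop(goal: str, llm: Callable[[str], str], max_steps: int = 8) -> str:
    """Alternate Thought / Action / Observation steps until the LLM finishes."""
    transcript = f"Goal: {goal}\n"
    instruction = (
        "Reply with 'Thought: ...' followed by either "
        "'Action: <tool>[<arg>]' or 'Finish: <answer>'.\n"
    )
    for _ in range(max_steps):
        step = llm(transcript + instruction)
        transcript += step + "\n"
        if "Finish:" in step:          # the agent declares the task complete
            return step.split("Finish:", 1)[1].strip()
        if "Action:" in step:          # execute the requested tool
            call = step.split("Action:", 1)[1].strip()
            name, _, arg = call.partition("[")
            observation = TOOLS[name.strip()](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"
    return transcript  # step budget exhausted; return the trace for inspection
```

A scripted stub passed as `llm` is enough to exercise the loop in isolation; in the proposed architecture, the same slot would be filled by the production LLM.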
To ensure that the generated visualizations are not only technically correct but also pedagogically sound, the architecture incorporates two key features. First, a Retrieval-Augmented Generation (RAG) mechanism provides the agent with access to a curated Data Store of expert knowledge on visualization principles and educational best practices. Second, an internal "Critic" module provides a loop for iterative validation and self-correction of the agent's decisions. The primary contribution of this work is a novel paradigm that automates the end-to-end process of creating immersive educational materials, thus democratizing access to AR technology for non-expert educators.
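The interplay between the two features can be illustrated with a short sketch. Everything in it is an assumption made for illustration: the toy character-frequency embedding stands in for a real embedding model, `KNOWLEDGE_BASE` stands in for the curated Data Store, and the `generate`/`critique` callables stand in for LLM calls.

```python
# Illustrative sketch of RAG retrieval plus a Critic self-correction loop.
# The embedding, knowledge base, and callables are placeholders, not the
# paper's actual Data Store or Critic implementation.

import math
from typing import Callable

KNOWLEDGE_BASE = [
    "Prefer bar charts over pie charts when comparing many categories.",
    "Anchor AR labels to physical surfaces so text stays readable.",
    "Limit a single AR scene to one key insight per learning objective.",
]

def embed(text: str) -> list[float]:
    """Toy bag-of-characters embedding; a real system would use a model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k guidelines most similar to the query (cosine similarity)."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(doc))), doc)
              for doc in KNOWLEDGE_BASE]
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

def generate_with_critic(
    goal: str,
    generate: Callable[[str, list[str]], str],
    critique: Callable[[str, list[str]], str],
    max_rounds: int = 3,
) -> str:
    """Draft a visualization spec, then revise until the Critic approves."""
    context = retrieve(goal)           # RAG step: ground the draft in guidelines
    draft = generate(goal, context)
    for _ in range(max_rounds):
        verdict = critique(draft, context)   # e.g. "OK" or a revision request
        if verdict == "OK":
            return draft
        draft = generate(goal, context + [verdict])  # self-correction step
    return draft  # best effort after the revision budget is spent
```

Bounding the revision loop with `max_rounds` keeps the self-correction cycle from stalling the pipeline when the Critic and generator fail to converge.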
Keywords: autonomous agent, generative AI, data visualization, augmented reality, AR, immersive technologies, education, RAG, ReAct.
doi: 10.32403/1998-6912-2025-1-70-175-184