Optimization of interaction with neural networks through RAG and LLM integration

Author(s): Polishchuk V. A., Myklushka I. Z.
Collection number: № 1 (68)
Pages: 46-53

This article introduces a method for managing artificial intelligence (AI) models by leveraging Retrieval-Augmented Generation (RAG) as a central mediator. This approach optimizes the interaction between users and various neural networks, utilizing advanced linguistic processing technologies integrated with sophisticated search algorithms. This synergy ensures efficient organization and retrieval of relevant data, facilitating the generation of textual and other forms of content by AI systems. The methodology enhances the interface between AI systems and users, making it more adaptable and user-friendly for complex task management. By incorporating RAG, these systems can effectively utilize existing data, allowing for dynamic adaptation to changing conditions in real time. This not only improves the quality of interactions but also boosts the efficiency of processing requests, thereby expanding the potential for both creative and analytical applications across various fields.
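To illustrate the mediator role described above, the following sketch shows a minimal retrieval-augmented pipeline: a simple retriever selects relevant passages, which are then prepended to the prompt passed to a generative model. The corpus, the bag-of-words scoring scheme, and the generate() stub are illustrative assumptions, not the implementation described in the article.

```python
# Minimal RAG-mediator sketch: retrieve context, augment the prompt, then generate.
# Corpus, scoring scheme, and generate() are illustrative placeholders.
from collections import Counter
import math

CORPUS = [
    "RAG combines a retrieval step with a generative language model.",
    "A mediator routes user requests to the most suitable neural network.",
    "Modular integration through APIs simplifies adding new AI services.",
]

def score(query: str, document: str) -> float:
    """Cosine similarity over simple bag-of-words counts."""
    q, d = Counter(query.lower().split()), Counter(document.lower().split())
    dot = sum(q[t] * d[t] for t in set(q) & set(d))
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most relevant to the query."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative model (e.g., a GPT-style LLM)."""
    return f"[model output for prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    """Mediator step: combine retrieved context with the user query before generation."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(answer("How does RAG mediate between users and neural networks?"))
```

In a deployed system, the retriever would typically query a vector index over embeddings and generate() would call an external LLM, but the mediator structure remains the same.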

The article also explores the potential of combining RAG with Large Language Models (LLMs) such as GPT. This combination aims to develop multifunctional AI systems that enhance accuracy and adaptability, facilitating smoother interactions between humans and machines. The integration of RAG with LLMs promises to refine the responsiveness of AI systems to user inputs, making them more intuitive and capable of handling nuanced interactions. The article elaborates on the scalability and adaptability of RAG technologies, which are critical for the development and future evolution of intelligent systems. Through centralized management via APIs, RAG-enabled systems support modular integration of services, which simplifies the incorporation of new functionalities and enhances overall system flexibility. This approach lowers the entry threshold for users and developers, fostering an environment where continuous improvement and customization are feasible. The deployment of RAG in AI systems exemplifies a significant advancement in AI technology, emphasizing its potential to transform the landscape of AI interactions and system design.
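The modular, API-centred integration mentioned above can be sketched as a central service registry to which new AI services are attached without modifying the mediator itself. The ServiceRegistry class and the registered handlers below are hypothetical examples, assumed for illustration rather than taken from the system described in the article.

```python
# Sketch of centralized management: a registry maps task types to pluggable services.
from typing import Callable, Dict

class ServiceRegistry:
    """Central point of management: routes each task type to a registered service."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, handler: Callable[[str], str]) -> None:
        """Add or replace a service handler for a given task type."""
        self._services[task] = handler

    def dispatch(self, task: str, request: str) -> str:
        """Route a user request to the service registered for the task."""
        if task not in self._services:
            raise KeyError(f"No service registered for task '{task}'")
        return self._services[task](request)

# Example: plugging in two services; a real deployment would wrap API clients here.
registry = ServiceRegistry()
registry.register("text", lambda req: f"[LLM response to: {req}]")
registry.register("retrieval", lambda req: f"[top documents for: {req}]")

print(registry.dispatch("text", "Summarize the benefits of RAG"))
```

Because new functionality is added by registering another handler, the entry threshold for extending the system stays low and existing services remain untouched.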

Keywords: neural networks, large language models (LLM), mediator between models, optimization of interaction, contextual search, efficiency of AI systems, modular integration, content generation, artificial intelligence (AI), adaptive systems, process automation, Retrieval-Augmented Generation (RAG), response personalization, technology integration.

doi: 10.32403/1998-6912-2024-1-68-46-53

