Author(s) | Collection number | Pages
---|---|---
Плахтина З. І., Selmenska Z. M. | № 2 (69) | 91-101
This research presents an innovative approach to developing intelligent content moderation systems for electronic publications through the integration of Large Language Models (LLMs) and the Retrieval-Augmented Generation (RAG) architecture. The study addresses critical challenges in automated content moderation, focusing on the detection of misinformation, manipulative content, and potentially harmful materials. The proposed system combines the contextual understanding capabilities of LLMs with RAG's ability to access and use current information, creating a more accurate and adaptable moderation solution.
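The abstract does not publish the system's implementation. As an illustration only, the sketch below shows one way an LLM call could be combined with retrieval of current reference material to produce a moderation decision. The fact base, the word-overlap retriever, and the `call_llm()` stub are hypothetical placeholders, not the authors' code or any specific provider's API.

```python
# Minimal sketch of an LLM + RAG moderation check (assumed design; the paper
# does not publish its implementation). The fact base, retriever and the
# call_llm() stub below are illustrative placeholders only.

from dataclasses import dataclass


@dataclass
class Verdict:
    label: str      # e.g. "allow", "flag_misinformation", "flag_hidden_ad"
    rationale: str  # short explanation returned by the model


# Toy store of "current information"; a production system would use a vector
# database refreshed from verified sources.
FACT_BASE = [
    "The product mentioned in the press release was recalled in 2024.",
    "The quoted statistic comes from an unverified social media post.",
    "Sponsored content must be labelled as advertising under editorial policy.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank fact-base entries by naive word overlap with the query text."""
    q_words = set(query.lower().split())
    scored = sorted(FACT_BASE, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]


def call_llm(prompt: str) -> Verdict:
    """Placeholder for the actual LLM call (provider and model are not named in the paper)."""
    # A real implementation would send the prompt to a model and parse its
    # structured response; here a fixed verdict stands in for that step.
    return Verdict(label="flag_misinformation", rationale="Claim contradicts retrieved facts.")


def moderate(article_text: str) -> Verdict:
    """Combine retrieved context with the article text in a single moderation prompt."""
    context = "\n".join(retrieve(article_text))
    prompt = (
        "You are a content moderator for an electronic publication.\n"
        f"Reference facts:\n{context}\n\n"
        f"Article:\n{article_text}\n\n"
        "Decide: allow, flag_misinformation, flag_manipulation, or flag_hidden_ad."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(moderate("New post claims the recalled product is perfectly safe."))
```

In a deployed system the keyword retriever would typically be replaced by embedding search over a regularly refreshed knowledge base, which is what gives the RAG component its access to up-to-date facts described in the abstract.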
The research examines the practical aspects of implementing such systems in modern electronic publications and analyzes the results of real-world testing. The methodology includes a comprehensive evaluation of system performance across various content types, demonstrating significant improvements in moderation accuracy and efficiency. Special attention is paid to the system’s self-learning capabilities and its ability to adapt to new types of content and threats.
The paper also explores the economic efficiency of implementing automated moderation systems, presenting data on operational cost reduction and improvements in the publication workflow. The results show a substantial reduction in manual moderation requirements while maintaining high accuracy standards, particularly in detecting complex violation cases such as hidden advertising and sophisticated forms of misinformation. The findings contribute to the ongoing development of content management technologies and offer practical solutions to modern digital publishing challenges.
The system described in this paper represents a significant advancement in content moderation technology, offering both theoretical insights and practical applications for the digital publishing industry. Its implementation demonstrates the potential for improving content quality and safety in the modern information space while maintaining operational efficiency.
Keywords: content moderation, automated moderation systems, machine learning algorithms, large language models (LLM), RAG architecture, hybrid moderation systems, semantic text analysis, fact-checking.
doi: 10.32403/1998-6912-2024-2-69-82-90