arXiv

Cost-Efficient RAG for Entity Matching with LLMs: A Blocking-based Exploration

Chuangtao Ma, Zeyu Zhang, Arijit Khan
Feb 5, 2026
Tags: Retrieval-Augmented Generation (RAG) · Entity Matching (EM) · Blocking · CE-RAG4EM · LLMs in Data Integration · Blocking-based Retrieval

About This Paper

Retrieval-augmented generation (RAG) enhances LLM reasoning in knowledge-intensive tasks, but existing RAG pipelines incur substantial retrieval and generation overhead when applied to large-scale entity matching. To address this limitation, we introduce CE-RAG4EM, a cost-efficient RAG architecture that reduces computation through blocking-based batch retrieval and generation. We also present a unified framework for analyzing and evaluating RAG systems for entity matching, focusing on blocking-aware optimizations and retrieval granularity. Extensive experiments suggest that CE-RAG4EM can achieve comparable or improved matching quality while substantially reducing end-to-end runtime relative to strong baselines. Our analysis further reveals that key configuration parameters introduce an inherent trade-off between performance and overhead, offering practical guidance for designing efficient and scalable RAG systems for entity matching and data integration.
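To make the core idea concrete, the sketch below illustrates blocking-based batch retrieval in plain Python: records are grouped into blocks by a cheap blocking key, and candidate pairs are emitted one batch per block, so a downstream retrieval or generation call can cover a whole block instead of running once per pair. This is a minimal, self-contained illustration, not the paper's CE-RAG4EM implementation; the first-token blocking key and the record fields (`id`, `name`) are assumptions for the example.

```python
from collections import defaultdict

def blocking_key(record):
    # Illustrative blocking key: lowercase first token of the name field.
    # Real systems use rule-based, learned, or hashed keys.
    return record["name"].split()[0].lower()

def build_blocks(records):
    # Group records by their blocking key.
    blocks = defaultdict(list)
    for r in records:
        blocks[blocking_key(r)].append(r)
    return blocks

def batched_candidate_pairs(left, right):
    """Yield one batch of candidate pairs per shared block, so a single
    retrieval/generation call can serve the whole block rather than
    issuing one call per candidate pair."""
    left_blocks, right_blocks = build_blocks(left), build_blocks(right)
    for key in left_blocks.keys() & right_blocks.keys():
        batch = [(a, b) for a in left_blocks[key] for b in right_blocks[key]]
        yield key, batch

# Toy product tables (hypothetical data).
left = [{"id": 1, "name": "Apple iPhone 12"},
        {"id": 2, "name": "Sony WH-1000XM4"}]
right = [{"id": "a", "name": "apple iphone 12 64gb"},
         {"id": "b", "name": "Bose QC45"}]

batches = dict(batched_candidate_pairs(left, right))
# Only the shared "apple" block produces candidates; all cross-block
# pairs are pruned before any retrieval or LLM call is made.
```

The cost saving comes from two places: blocking shrinks the candidate set from the full cross product to within-block pairs, and batching amortizes retrieval and generation overhead across each block, which is the trade-off between performance and overhead the paper analyzes.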
