In the rapidly evolving domain of artificial intelligence (AI), data is the bedrock upon which intelligent systems are built.
A particularly innovative application of AI in data handling is Retrieval-Augmented Generation (RAG), which supplies external data to a model to improve the quality and relevance of generated content. Traditionally, making external data available to such models has relied heavily on the Extract, Transform, and Load (ETL) architecture.
This approach entails aggregating voluminous data into a new repository, typically a vector database, designed to facilitate rapid and efficient data retrieval. However, this conventional methodology presents several significant drawbacks that necessitate a reevaluation of its efficacy and safety.
The Pitfalls of Traditional ETL Processes
The ETL process, while foundational to many data management strategies, involves transferring large amounts of data into a centralized store. The operation is both resource-intensive and costly, requiring substantial compute and storage. (And you end up paying twice, at least: once for the system of record and again for the new “repository of intelligence.”)
Most vendors advocate for this method because their pricing is typically based on CPU consumption and data stored, so they benefit directly from extensive use of system resources. From a security perspective, the ETL process also poses considerable risk: aggregating comprehensive datasets into a single repository magnifies the potential impact of a data breach, for example when the underlying ACLs are re-modeled incorrectly.
Whether this approach actually enables AI to work precisely in the enterprise is questionable. Consolidating into a single silo is sometimes worth considering: if a new repository measurably improves performance on important tasks, it earns its keep. But loading multiple datasets into a new database does not guarantee the ability to pinpoint relevant information. On the contrary, it often complicates decision-making when data is handed to the AI, which is forced to sift through more candidates that may appear conflated once stripped of their original application and data context.
SWIRL’s Innovative Approach to Data Utilization
At SWIRL, we have pioneered an alternative strategy that avoids the complexity and cost of the traditional ETL model in favor of AI-powered metasearch. Recognizing that the underlying data repositories can already retrieve relevant results, SWIRL uses a reader Large Language Model (LLM) to re-rank the results returned from multiple sources – asynchronously, in just a few seconds.
This approach ensures that only the most pertinent data is utilized, as defined by the underlying systems of record – incorporating ontologies, taxonomies, lexicons, and both application and data schemas. This guarantees two things: the AI receives more relevant and less irrelevant data, and the human can compare the results (and the subsequently generated answers) against those systems of record.
This is the way to build confidence in AI.
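The pattern can be sketched in a few lines. This is an illustrative toy, not SWIRL's actual implementation: the mock connectors and the lexical-overlap scorer (standing in for the reader LLM) are assumptions made for the sake of a self-contained example.

```python
import asyncio

# Mock connectors standing in for real systems of record. Assumption:
# each source already returns its own relevance-ordered results.
async def query_source(name, query):
    await asyncio.sleep(0.01)  # simulate network latency
    corpus = {
        "wiki": ["RAG augments generation with retrieved data",
                 "ETL moves data into a central store"],
        "crm":  ["Customer asked about RAG pipelines",
                 "Invoice totals for Q3"],
    }
    return [{"source": name, "text": t} for t in corpus[name]]

def rerank(query, results):
    """Toy lexical-overlap scorer standing in for a reader LLM."""
    terms = set(query.lower().split())
    def score(r):
        return len(terms & set(r["text"].lower().split()))
    return sorted(results, key=score, reverse=True)

async def federated_search(query, sources):
    # Query all sources concurrently, then re-rank the merged list.
    batches = await asyncio.gather(*(query_source(s, query) for s in sources))
    merged = [r for batch in batches for r in batch]
    return rerank(query, merged)

results = asyncio.run(federated_search("RAG data", ["wiki", "crm"]))
```

The key design point is that no data is copied anywhere: each source answers the query itself, and only the small merged result list passes through the re-ranker.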
Supporting our stance, the XetHub study compellingly demonstrates that re-ranking “naive” results from a conventional search engine outperforms the traditional method of transferring data into a vector database.
Conclusion: Query, Don’t Load
The insights from our approach and corroborating studies imply a significant shift in how data is integrated into AI applications.
Instead of relying on costly, risky, and inefficient ETL processes, we propose a streamlined methodology: simply query, re-rank, and then RAG.
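Those three steps can be sketched end to end. The helper names and the overlap scorer below are hypothetical stand-ins (the generation call itself is left out); the point is only the shape of the pipeline.

```python
def query_sources(query):
    # Step 1: stand-in for federated queries to existing systems of record.
    return [
        {"source": "kb",   "text": "SWIRL re-ranks results from many sources"},
        {"source": "mail", "text": "Lunch menu for Friday"},
    ]

def rerank(query, results):
    # Step 2: toy overlap score standing in for a reader LLM re-ranker.
    terms = set(query.lower().split())
    return sorted(results,
                  key=lambda r: len(terms & set(r["text"].lower().split())),
                  reverse=True)

def build_rag_prompt(query, results, k=1):
    # Step 3: ground the generation prompt in the top-k re-ranked passages only.
    context = "\n".join(f"[{r['source']}] {r['text']}" for r in results[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

query = "re-ranks sources"
prompt = build_rag_prompt(query, rerank(query, query_sources(query)))
```

Because the prompt carries source labels, a generated answer remains traceable back to the system of record it came from.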
This minimizes security and operational risks, reduces overhead costs, and enhances the AI system’s ability to deliver precise, relevant, and traceable outputs. By adopting this refined strategy, organizations can safeguard their data more effectively and harness their existing technological infrastructure to create more intelligent and responsive AI systems that actually help users save time (and money).
It is time to move away from outdated data integration practices and towards a more secure, efficient, and intelligent future.