Software companies can reduce churn and increase engagement by adding generative AI to their products. However, challenges such as a lack of relevant data, security concerns, and integration with existing processes can make adoption difficult. When successfully embedded, generative AI can differentiate a product and unlock a wide range of use cases.
Swirl, an open-source tool released on GitHub under the Apache 2.0 license, makes it quick to extend existing applications. It sends each user query to multiple sources (including the host application), gathers the responses asynchronously, and re-ranks the combined results using a built-in Large Language Model (LLM).
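For a sense of what that looks like from the application side, here is a minimal sketch of calling a local Swirl instance over HTTP. The endpoint path, port, and response fields are assumptions about a typical local install, not details confirmed by this article; consult the Swirl documentation for the exact API.

```python
# Minimal sketch: querying a local Swirl instance over HTTP.
# Assumptions (not confirmed here): Swirl runs at http://localhost:8000,
# the search endpoint is /swirl/search/?q=..., and the JSON response
# contains a "results" list with "title", "body", and "url" fields.
import requests

SWIRL_URL = "http://localhost:8000/swirl/search/"  # assumed local install


def federated_search(query: str) -> list[dict]:
    """Send one query to Swirl, which fans it out to the configured
    sources, gathers the responses, and returns re-ranked results."""
    response = requests.get(SWIRL_URL, params={"q": query}, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])


if __name__ == "__main__":
    for hit in federated_search("quarterly churn report")[:5]:
        print(hit.get("title"), "-", hit.get("url"))
```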
As Raman Ramanenkou of Sense notes: “Setting up and running Swirl in a Docker container is incredibly straightforward—it takes just a few minutes. Configuring all the connectors is a breeze, enabling you to seamlessly search and retrieve information from various sources. Swirl operates solely within the bounds of authorized access, ensuring data privacy.”
ChatGPT's capabilities unlock a wide variety of use cases, from insight extraction to question answering to general workflow automation. The challenge is getting ChatGPT to understand your business and to take its many data sources into account.
For teams looking to add generative AI to their applications, Swirl Metapipe enables Retrieval-Augmented Generation (RAG). It lets an application dynamically search and retrieve information from multiple sources, including application and internal data, then process the results and “feed” them back to ChatGPT as context. By integrating Swirl Metapipe, applications can add generative AI quickly and easily.
This approach lets businesses use existing generative AI language models as reasoning engines, with Swirl search supplying the relevant data the model grounds its answers in. Embedding Swirl Metapipe empowers users to answer questions directly from application data, as sketched below.
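As a rough illustration of that retrieve-then-generate flow, the sketch below reuses the hypothetical federated_search() helper from the earlier example and passes the top results to ChatGPT as context. The model name and OpenAI client calls are illustrative assumptions (OpenAI Python SDK v1 style) and should be adapted to your setup; it is not the Metapipe implementation itself.

```python
# Minimal RAG sketch: retrieve context via Swirl, then ask ChatGPT.
# Assumptions: the federated_search() helper from the earlier sketch,
# an OPENAI_API_KEY in the environment, and the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_with_rag(question: str) -> str:
    # 1. Retrieve: let Swirl query the configured sources and re-rank.
    hits = federated_search(question)[:5]
    # 2. Augment: pack the top results into the prompt as context.
    context = "\n\n".join(
        f"{h.get('title', '')}\n{h.get('body', '')}" for h in hits
    )
    # 3. Generate: ask the model to answer only from the retrieved data.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content


print(answer_with_rag("Which accounts are at risk of churning this quarter?"))
```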
Contact hello@swirl.today to get started…