We had a great webinar this past week with Mike Cizmar and Judah Phillips. One topic stood out: “How to plan for the jump from POC to production.”
A Simple Demo Isn’t Always Simple to Scale
Let’s take a popular example.
Many AI products today offer bots that answer questions “from a PDF.” The demo looks great. The LLM quickly answers questions using the information provided.
But what happens when you want to answer more than a few simple questions? And what if you have built your own dynamically generated website? Will the bot be able to cope with the navigation and advertising links?
Assuming the bot can read the web pages, you now have to collect the information from multiple locations, convert those results into a format the LLM can understand – a PDF, in this case – and upload it to the LLM.
And then update it when the underlying information changes.
Suddenly, the time you hoped to free up is spent managing a new curation-and-publishing workflow, sketched below, that quickly becomes laborious and error-prone.
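To make the pain concrete, here is a minimal sketch of that workflow in Python. The page URLs and output filename are hypothetical, and it assumes the third-party requests, beautifulsoup4, and fpdf2 packages; a real pipeline would also need authentication, site-specific ad selectors, scheduling, and error handling.

```python
# A hedged sketch of the manual "scrape -> PDF -> upload" curation workflow.
# PAGES and the output filename are hypothetical; assumes the requests,
# beautifulsoup4, and fpdf2 packages are installed.
import requests
from bs4 import BeautifulSoup
from fpdf import FPDF

PAGES = [  # dynamically generated pages the bot is supposed to "know"
    "https://example.com/products/faq",
    "https://example.com/support/returns",
]

def extract_main_text(html: str) -> str:
    """Strip navigation, ads, and scripts; keep the readable body text."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["nav", "header", "footer", "aside", "script", "style"]):
        tag.decompose()  # drop the page chrome the LLM should not see
    return soup.get_text(separator="\n", strip=True)

def build_pdf(chunks: list[str], path: str) -> None:
    """Write each page's text into one PDF the LLM vendor can ingest."""
    pdf = FPDF()
    pdf.set_font("Helvetica", size=11)
    for chunk in chunks:
        pdf.add_page()
        # Core PDF fonts are latin-1 only; replace anything outside it.
        safe = chunk.encode("latin-1", "replace").decode("latin-1")
        pdf.multi_cell(0, 5, safe)
    pdf.output(path)

chunks = [extract_main_text(requests.get(url, timeout=30).text) for url in PAGES]
build_pdf(chunks, "knowledge_base.pdf")
# The PDF now has to be uploaded to the LLM, and this whole script re-run
# (and the file re-uploaded) every time any of those pages changes.
```

Every step here is a place for the knowledge base to silently drift out of date.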
How SWIRL Solves This Problem
Recently, a financial think tank replaced its PDF-trained chatbot with SWIRL.
Instead of creating new PDFs for every bot you want answering questions, SWIRL's AI Co-Pilot answers questions by searching relevant internal sources – like SharePoint sites, Box collections, and password-protected websites – and generating results with Retrieval-Augmented Generation (RAG).
No special PDF handling required.
Also, no delays. When a system of record is updated, the next response from the LLM will reflect the change.
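To picture the difference, here is a toy sketch of the retrieve-then-generate loop. It is not SWIRL's actual API: the in-memory SOURCES dict and naive keyword search are hypothetical stand-ins for real connectors and a relevance engine, and the assembled prompt would be handed to an LLM client.

```python
# A toy, hedged sketch of Retrieval-Augmented Generation (RAG): search the
# live systems of record at question time, then build the LLM prompt from
# what comes back. SOURCES and the keyword match are illustrative stand-ins.
SOURCES = {  # hypothetical systems of record, queried live per question
    "sharepoint:policies": "Refunds are processed within 14 days.",
    "box:handbook": "Support hours are 9am-5pm Eastern, Monday to Friday.",
}

def search_sources(question: str) -> list[tuple[str, str]]:
    """Naive keyword retrieval; a real deployment federates real queries."""
    words = set(question.lower().split())
    return [(src, text) for src, text in SOURCES.items()
            if words & set(text.lower().split())]

def build_prompt(question: str, passages: list[tuple[str, str]]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(f"[{src}] {text}" for src, text in passages)
    return f"Answer using only these passages:\n{context}\nQuestion: {question}"

# A system of record changes...
SOURCES["sharepoint:policies"] = "Refunds are processed within 7 days."

# ...and the very next prompt reflects it, with no PDF to regenerate.
question = "How fast are refunds processed?"
print(build_prompt(question, search_sources(question)))
```

Because retrieval happens at question time, the update made above shows up in the next answer automatically; there is no snapshot to republish.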