Versatility and Efficiency in RAG

To further equip you with a comprehensive understanding of RAG and its adaptability, let's address some frequently asked questions (FAQs) and challenges that developers often encounter. A few of these were asked by participants during this bootcamp.

1. Which data types can RAG handle?

One of the most compelling features of RAG and frameworks such as LangChain, LlamaIndex, or the Pathway LLM App (which we will use going forward) is their flexibility to work with a wide array of data types. Whether it's relational databases, APIs, transcribed audio, or even live feeds from the internet, RAG can seamlessly integrate these into its retrieval mechanism. This adaptability enhances the model's ability to generate contextually accurate and informative responses.

  • Data types supported by these popular frameworks: relational databases, free-form text, PDFs, APIs, transcribed audio, and streaming platforms and connectors such as Kafka, Debezium, and Redpanda (see the sketch below).
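
To make this concrete, here's a minimal sketch of ingesting two of those formats, a PDF and a plain-text file, into one searchable index with LangChain. The file names are hypothetical, and it assumes the langchain-community, langchain-openai, and faiss-cpu packages plus an OpenAI API key; any loader, embedding model, or index could be swapped in.

```python
from langchain_community.document_loaders import PyPDFLoader, TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load two different source formats into one list of documents.
docs = []
docs += PyPDFLoader("quarterly_report.pdf").load()   # hypothetical PDF
docs += TextLoader("support_tickets.txt").load()     # hypothetical text file

# Split into chunks and build a single in-memory vector index over both sources.
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# The index now answers queries over PDFs and plain text alike.
print(index.similarity_search("What changed in Q3?", k=2))
```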

2. Are Vector Databases necessary for RAG?

Short answer – no. Let's see why.

In case the term is new to you, a vector database is a specialized database for efficiently storing, searching, and retrieving vector embeddings – a common need in LLM applications and recommender systems. Like the emergence of foundational Large Language Models (LLMs), or what some call the "AI wave," Retrieval-Augmented Generation (RAG) is also a relatively new concept. It gained prominence after the 2020 publication "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" by a team at Facebook AI Research.

Because RAG relies on vector embeddings, developers initially gravitated toward dedicated vector databases – a pattern already familiar from recommender systems.

However, this understanding is quickly evolving.

  • Misconception about Vector Databases: It's a common belief that vector databases are essential for RAG. While libraries for LLMs, such as the LLM App, are compatible with well-known vector databases (for example, Pinecone or Weaviate), these are not mandatory components.
  • Enterprise Challenges: Introducing any new database into an enterprise environment comes with its own set of complexities and challenges, making a simpler solution preferable in many instances.
  • Built-in Real-time Indexing within Specific Frameworks: Tools like Pathway can generate and manage their own real-time vector indexes, removing the need for a separate vector database. Additionally, conventional databases like PostgreSQL are expanding their features to include built-in support for vector indexing, thanks to extensions like pgvector – the sketch below shows the idea.
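
As a rough illustration of that last point, here's a minimal sketch of vector search inside plain PostgreSQL via the pgvector extension, using the psycopg driver. The connection string, table name, and toy three-dimensional embeddings are placeholders; a real pipeline would store embeddings produced by your model, and you need a database where the extension can be installed.

```python
import psycopg

with psycopg.connect("dbname=rag_demo") as conn:  # placeholder connection string
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS rag_chunks "
        "(id serial PRIMARY KEY, body text, embedding vector(3))"
    )
    conn.execute(
        "INSERT INTO rag_chunks (body, embedding) VALUES (%s, %s::vector)",
        ("Pathway maintains a real-time vector index.", "[0.1, 0.9, 0.0]"),
    )
    # "<->" is pgvector's distance operator: the closest chunks come back first.
    rows = conn.execute(
        "SELECT body FROM rag_chunks ORDER BY embedding <-> %s::vector LIMIT 1",
        ("[0.1, 0.8, 0.1]",),
    ).fetchall()
    print(rows)
```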

So, while vector databases have their appeal, it's important to understand that they are not the only path to an efficient RAG implementation.

3. Is a Separate Real-Time Processing Framework Needed for a Real-Time Stream of Data?

Good news – no. Under the hood, the LLM App uses Pathway, a high-performance data processing engine (per its 2023 benchmarks) that is suitable for both batch and streaming (live data) use cases. But how does it handle stream-processing use cases efficiently? Check out the Bonus Resource ahead on incremental indexing, linked to the Bonus Module on kNN+LSH.
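
To give a feel for the "one engine for both batch and streaming" idea, here is a minimal sketch, assuming Pathway is installed. The directory path, schema, and word-count step are illustrative only; the point is that the same pipeline definition runs over static files (mode="static") or keeps updating as new files arrive (mode="streaming").

```python
import pathway as pw

class DocSchema(pw.Schema):
    text: str

# Read CSV files from a folder; in streaming mode, Pathway keeps watching for new files.
docs = pw.io.csv.read("./documents/", schema=DocSchema, mode="streaming")

# Downstream steps are written once; Pathway incrementally recomputes them whenever
# the inputs change (a word count stands in for embedding/indexing here).
stats = docs.select(n_words=pw.apply(lambda t: len(t.split()), pw.this.text))

pw.io.csv.write(stats, "./word_counts.csv")
pw.run()  # starts the computation; in streaming mode it keeps running
```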

4. Is the ChatGPT Plugin for Bing Search an example of RAG?

Some may wonder whether certain web search plugins in LLM applications like ChatGPT utilize a Retrieval-Augmented Generation (RAG) approach. For example, a ChatGPT interface with a web search plugin can pull current information from the internet, providing a more accurate and up-to-date response.

Example: This plugin lets the model answer questions about events that occurred after its training cutoff by retrieving information from live web pages, such as a blog post.

In a way, these can be considered LLM applications that employ a retrieval-augmented generation strategy. However, they do not retrieve data from a diverse set of data sources, such as a collection of PDFs, links, or Kafka topics, and this affects the quality of their responses. In such scenarios, LLMs like ChatGPT can only offer the best answer they can find on the specific webpage that the plugin accessed.

On the other hand, when you incorporate RAG into your custom LLM application, you benefit from efficient vector embeddings and vector search capabilities. These allow you to extract much more relevant information from a comprehensive data corpus, helping you identify the most pertinent answers. When paired with something like the Pathway LLM xpack, which is optimized for real-time data streams (kudos to Pathway's engine under the hood), you're retrieving the most relevant information from a real-time vector index. Isn't that fascinating? 🤩
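
To close the loop on what "incorporating RAG into your custom LLM application" looks like in code, here's a minimal sketch of the generation step. It assumes a retriever like the FAISS index built in the earlier sketch and the official openai package with an API key configured; the model name and prompt wording are illustrative choices, not the only way to do it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer(question: str) -> str:
    # Retrieve the most relevant chunks (assumes `index` from the earlier sketch).
    context = "\n\n".join(doc.page_content for doc in index.similarity_search(question, k=3))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("What changed in Q3?"))
```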

By understanding these facets, you're better equipped to leverage the strengths of RAG in your LLM architecture, whether for an enterprise solution or a personal project.