Your Hands-on RAG Journey
By now we know what RAG is, and the architecture diagram above shows a simple implementation of a basic RAG service that helps you build LLM applications.
Now we’ll build our projects in 4 parts.
- You'll first build a RAG project using OpenAI APIs. We'll then help you use APIs from Gemini, Replicate, and others for the same basic RAG use case.
- Next, you'll see how you can use APIs as a data source to build a RAG project.
- After that, you'll see how you can build RAG projects using open-source models, so all the data stays within the enterprise itself. You'll combine this with an Adaptive RAG technique that reduces costs by 4x without affecting accuracy.
- Lastly, you'll see an example where we pick LlamaIndex or LangChain along with Pathway as a vector store / retriever.
In all these cases, we'll leverage Pathway – the world's fastest data processing engine and a framework that gives you an in-memory incremental vector index that is production-ready, easy to scale, and open source!
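Before diving in, here is a minimal sketch of the retrieve-then-generate loop that every part above builds on. The keyword-overlap retriever and the prompt template below are illustrative stand-ins of our own making, not Pathway's or OpenAI's APIs; in the actual projects, the retriever is backed by a vector index and the prompt is sent to an LLM.

```python
import re

# Toy corpus standing in for your indexed documents.
DOCS = [
    "Pathway is a data processing engine with an incremental vector index.",
    "RAG retrieves relevant documents and feeds them to an LLM as context.",
    "Adaptive RAG adjusts how many documents are retrieved per question.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.

    A real RAG service would use embedding similarity against a
    vector index instead of this toy scoring function.
    """
    q_tokens = tokenize(query)
    scored = sorted(docs, key=lambda d: len(q_tokens & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "What is Adaptive RAG?"
    prompt = build_prompt(question, retrieve(question, DOCS))
    print(prompt)
```

The only moving parts you'll swap out across the four projects are the retriever (a Pathway vector index) and the generation step (OpenAI, Gemini, Replicate, or an open-source model).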
Let’s get started with the first one.