pw.xpacks.llm.question_answering

class pw.xpacks.llm.question_answering.BaseRAGQuestionAnswerer(llm, indexer, *, default_llm_name=None, short_prompt_template=<pathway.internals.udfs.UDFFunction object>, long_prompt_template=<pathway.internals.udfs.UDFFunction object>, summarize_template=<pathway.internals.udfs.UDFFunction object>)

Builds the logic and the API for a basic RAG application.

Base class to build a RAG app with the Pathway vector store and Pathway components. It gives the freedom to choose between two question answering strategies, a short (concise) and a long (detailed) response, selected per query in the POST request. The class is LLM-agnostic, allowing a choice between proprietary and open-source LLMs.

  • Parameters
    • llm (UDF) – LLM instance for question answering. See https://pathway.com/developers/api-docs/pathway-xpacks-llm/llms for available models.
    • indexer (VectorStoreServer) – Indexing object for search & retrieval to be used for context augmentation.
    • default_llm_name (str | None) – Default LLM model to be used in queries; only used if the model parameter is not specified in the POST request. Omitting this or setting it to None requires the model to be specified in each request.
    • short_prompt_template (UDF) – Template for document question answering with a short response. A pw.udf function is expected (see the sketch after this list). Defaults to pathway.xpacks.llm.prompts.prompt_short_qa.
    • long_prompt_template (UDF) – Template for document question answering with a long response. A pw.udf function is expected. Defaults to pathway.xpacks.llm.prompts.prompt_qa.
    • summarize_template (UDF) – Template for text summarization. Defaults to pathway.xpacks.llm.prompts.prompt_summarize.
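
Since the prompt templates are plain pw.udf functions, they can be swapped for custom ones. Below is a minimal sketch of a hypothetical custom short-answer template; the argument names and types (query, docs) are assumptions made for illustration, so mirror the signature of pathway.xpacks.llm.prompts.prompt_short_qa from your Pathway version before passing it as short_prompt_template.

import pathway as pw

@pw.udf
def my_short_prompt(query: str, docs: list[pw.Json]) -> str:
    # Hypothetical template; the argument names and types are assumptions.
    context = "\n".join(str(doc) for doc in docs)
    return (
        "Answer the question in a single short sentence, "
        "using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# app = BaseRAGQuestionAnswerer(llm=chat, indexer=vector_server,
#                               short_prompt_template=my_short_prompt)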

Example:

import pathway as pw
from pathway.xpacks.llm import embedders, splitters, llms, parsers
from pathway.xpacks.llm.vector_store import VectorStoreServer
from pathway.udfs import DiskCache, ExponentialBackoffRetryStrategy
from pathway.xpacks.llm.question_answering import BaseRAGQuestionAnswerer

# Read the source documents as binary files, keeping their metadata.
my_folder = pw.io.fs.read(
    path="/PATH/TO/MY/DATA/*",  # replace with your folder
    format="binary",
    with_metadata=True,
)
sources = [my_folder]

app_host = "0.0.0.0"
app_port = 8000

# Parsing, splitting and embedding components used by the vector store.
parser = parsers.ParseUnstructured()
text_splitter = splitters.TokenCountSplitter(max_tokens=400)
embedder = embedders.OpenAIEmbedder(cache_strategy=DiskCache())

vector_server = VectorStoreServer(
    *sources,
    embedder=embedder,
    splitter=text_splitter,
    parser=parser,
)

# LLM used for answering; calls are cached and retried with exponential backoff.
DEFAULT_GPT_MODEL = "gpt-3.5-turbo"
chat = llms.OpenAIChat(
    model=DEFAULT_GPT_MODEL,
    retry_strategy=ExponentialBackoffRetryStrategy(max_retries=6),
    cache_strategy=DiskCache(),
    temperature=0.05,
)

app = BaseRAGQuestionAnswerer(
    llm=chat,
    indexer=vector_server,
    default_llm_name=DEFAULT_GPT_MODEL,
)

# Expose the REST API and start serving.
app.build_server(host=app_host, port=app_port)
app.run_server()
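
Once run_server() is called it blocks the process, so the app is queried over HTTP from another process or terminal. The sketch below assumes the default question-answering endpoint /v1/pw_ai_answer and the prompt, response_type and model request fields; these defaults may differ between Pathway versions, so verify them for your installation.

import requests

response = requests.post(
    "http://localhost:8000/v1/pw_ai_answer",
    json={
        "prompt": "What do my documents say about Pathway?",
        "response_type": "short",    # "short" (concise) or "long" (detailed)
        "model": "gpt-3.5-turbo",    # optional when default_llm_name is set
    },
)
print(response.json())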

build_server(host, port, **rest_kwargs)

Adds HTTP connectors to input tables and connects them with table transformers.

pw_ai_query(pw_ai_queries)

Main function for RAG applications that answer questions based on the available information.

run_server(with_cache=True, cache_backend=<pathway.persistence.Backend object>)

Starts the app with the cache configuration. Enabling persistence caches the embedding and LLM requests between runs.
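
The snippet below is a minimal sketch of passing an explicit persistence backend instead of the default one; the "./Cache" directory is only an example location.

import pathway as pw

app.run_server(
    with_cache=True,
    cache_backend=pw.persistence.Backend.filesystem("./Cache"),
)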

summarize_query(summarize_queries)

Function for summarizing the given texts.
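
Like question answering, summarization is exposed over HTTP once the server is running. The sketch below assumes the default /v1/pw_ai_summary endpoint and the text_list request field; both are assumptions that may vary between Pathway versions.

import requests

response = requests.post(
    "http://localhost:8000/v1/pw_ai_summary",
    json={"text_list": ["First text to summarize.", "Second text to summarize."]},
)
print(response.json())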

pw.xpacks.llm.question_answering.answer_with_geometric_rag_strategy(questions, documents, llm_chat_model, n_starting_documents, factor, max_iterations, strict_prompt=False)

Function for querying the LLM chat while providing an increasing number of documents until an answer is found. The documents are taken from the documents argument. Initially, the first n_starting_documents documents are embedded in the query. If the LLM chat fails to find an answer, the number of documents is multiplied by factor and the question is asked again, up to max_iterations times. For example, with n_starting_documents=2 and factor=3, consecutive queries use 2, 6, and 18 documents.

  • Parameters
    • questions (ColumnReference[str]) – Column with questions to be asked to the LLM chat.
    • documents (ColumnReference[list[str]]) – Column with documents to be provided along with a question to the LLM chat.
    • llm_chat_model (UDF) – Chat model which will be queried for answers.
    • n_starting_documents (int) – Number of documents embedded in the first query.
    • factor (int) – Factor by which the number of documents increases in each subsequent query if an answer is not found.
    • max_iterations (int) – Number of times to ask the question, each time with an increased number of documents.
    • strict_prompt (bool) – Whether the LLM should be strictly instructed to return JSON. Improves the results of small open-source models; not needed for OpenAI GPT models.
  • Returns
    A column with answers to the questions. If an answer is not found, None is returned.

Example:

import pandas as pd
import pathway as pw
from pathway.xpacks.llm.llms import OpenAIChat
from pathway.xpacks.llm.question_answering import answer_with_geometric_rag_strategy
chat = OpenAIChat()
df = pd.DataFrame(
    {
        "question": ["How do you connect to Kafka from Pathway?"],
        "documents": [
            [
                "`pw.io.csv.read` reads a table from one or several files with delimiter-separated values.",
                "`pw.io.kafka.read` is a generalized method to read the data from the given topic in Kafka.",
            ]
        ],
    }
)
t = pw.debug.table_from_pandas(df)
answers = answer_with_geometric_rag_strategy(t.question, t.documents, chat, 1, 2, 2)

pw.xpacks.llm.question_answering.answer_with_geometric_rag_strategy_from_index(questions, index, documents_column, llm_chat_model, n_starting_documents, factor, max_iterations, metadata_filter=None, strict_prompt=False)

Function for querying the LLM chat while providing an increasing number of documents until an answer is found. The documents are taken from index. Initially, the first n_starting_documents documents are embedded in the query. If the LLM chat fails to find an answer, the number of documents is multiplied by factor and the question is asked again, up to max_iterations times.

  • Parameters
    • questions (ColumnReference[str]) – Column with questions to be asked to the LLM chat.
    • index (DataIndex) – Index from which closest documents are obtained.
    • documents_column (str | ColumnReference) – Name of the column, in the table passed to the index, which contains the documents.
    • llm_chat_model (UDF) – Chat model which will be queried for answers.
    • n_starting_documents (int) – Number of documents embedded in the first query.
    • factor (int) – Factor by which the number of documents increases in each subsequent query if an answer is not found.
    • max_iterations (int) – Number of times to ask the question, each time with an increased number of documents.
    • strict_prompt (bool) – Whether the LLM should be strictly instructed to return JSON. Improves the results of small open-source models; not needed for OpenAI GPT models.
  • Returns
    A column with answers to the questions. If an answer is not found, None is returned.