Pathway AI Pipelines REST API
Pathway AI Pipelines include a REST API so they can be integrated smoothly into your workflow. This REST API is the primary interface for interacting with the deployed AI Pipelines.
The API lets you query the RAG pipeline and retrieve the generated results, making it easy to connect the AI Pipelines to other components of your application through plain HTTP requests.
The REST API provides the following endpoints:
Operation | Method | Endpoint | Description |
---|---|---|---|
Retrieve Closest Documents | POST | /v1/query | Retrieve the closest documents from the vector store based on a query. |
Retrieve Vector Store Statistics | GET | /v1/statistics | Retrieve statistics about the vector store. |
Answer with RAG | POST | /v1/pw_ai_answer | Generate an answer to a query using a RAG pipeline. |
Summarize Texts | POST | /v1/pw_ai_summary | Summarize a list of texts. |
List Indexed Documents | POST | /v1/pw_list_documents | List documents indexed in the vector store. |
Retrieve Closest Documents
Endpoint: POST /v1/query
Retrieve the closest documents from the vector store based on a given query.
Request Body
{
"query": "string",
"k": "integer (optional, default: 3)",
"metadata_filter": "string (optional, default: null)",
"filepath_globpattern": "string (optional, default: null)"
}
query
: The query string to search.
k
: The number of results to retrieve (default is 3).
metadata_filter
: Optional filter for metadata fields.
filepath_globpattern
: Optional glob pattern to filter file paths.
Note: A null value means no filters will be applied.
Response
A list of the closest matching documents.
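As a quick illustration, the sketch below sends a query with Python's `requests` library. The base URL (`http://localhost:8000`) and the query text are assumptions; substitute the host and port of your own deployment.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed address of the deployed pipeline

payload = {
    "query": "What is the company's revenue outlook?",  # example query text
    "k": 3,                        # number of closest documents to return
    "metadata_filter": None,       # null: no metadata filtering
    "filepath_globpattern": None,  # null: no file-path filtering
}

response = requests.post(f"{BASE_URL}/v1/query", json=payload)
response.raise_for_status()
for document in response.json():
    print(document)
```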
Retrieve Vector Store Statistics
Endpoint: GET /v1/statistics
Retrieve statistical information about the vector store, such as file counts and timestamps.
Response
{
"file_count": "integer",
"last_modified": "integer (UNIX timestamp)",
"last_indexed": "integer (UNIX timestamp)"
}
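A minimal sketch of reading these statistics, again assuming the pipeline is reachable at `http://localhost:8000`:

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed address of the deployed pipeline

stats = requests.get(f"{BASE_URL}/v1/statistics").json()
print("Indexed files:", stats["file_count"])
print("Last modified (UNIX):", stats["last_modified"])
print("Last indexed (UNIX):", stats["last_indexed"])
```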
Answer with RAG
Endpoint: POST /v1/pw_ai_answer
Provide a response to a query using a RAG pipeline.
Request Body
{
"prompt": "string",
"filters": "string (optional, default: null)",
"model": "string (optional, default: null)"
}
prompt
: The input question or prompt.
filters
: Optional metadata filter for narrowing document retrieval.
model
: Optional LLM to customize the response generation.
Response
{
"answer": "string"
}
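For example, here is a minimal sketch of asking the RAG pipeline a question. The base URL and prompt are placeholders; `model` is left as `null` so the pipeline's default LLM is used:

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed address of the deployed pipeline

payload = {
    "prompt": "What are the main risks mentioned in the latest report?",  # example prompt
    "filters": None,  # null: retrieve from all indexed documents
    "model": None,    # null: use the pipeline's default LLM
}

response = requests.post(f"{BASE_URL}/v1/pw_ai_answer", json=payload)
response.raise_for_status()
print(response.json()["answer"])
```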
Summarize Texts
Endpoint: POST /v1/pw_ai_summary
Summarize a list of texts.
Request Body
{
"text_list": ["string", ...],
"model": "string (optional, default: null)"
}
text_list
: List of texts to summarize.
model
: Optional LLM model for generating summaries.
Response
{
"summary": "string"
}
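A short sketch of summarizing two example texts (the texts and the base URL are placeholders):

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed address of the deployed pipeline

payload = {
    "text_list": [
        "Pathway AI Pipelines expose a REST API for querying the vector store.",
        "The API also provides RAG answering and summarization endpoints.",
    ],
    "model": None,  # null: use the pipeline's default LLM
}

response = requests.post(f"{BASE_URL}/v1/pw_ai_summary", json=payload)
response.raise_for_status()
print(response.json()["summary"])
```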
List Indexed Documents
Endpoint: POST /v1/pw_list_documents
List documents currently indexed in the vector store, with optional filtering or metadata selection.
Parameters
filters
: string (optional) - Metadata filter for the documents.
keys
: list[string] (default: ["path"]) - Metadata fields to include in the response.
Request Body
{
"filters": "string (optional, default: null)",
"keys": ["string", ...] (default: ["path"])
}
Response
[
{
"path": "string",
"last_modified": "string (timestamp)",
...
}
]
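Finally, a sketch of listing indexed documents along with selected metadata fields; the base URL and the chosen keys are illustrative:

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed address of the deployed pipeline

payload = {
    "filters": None,                    # null: list every indexed document
    "keys": ["path", "last_modified"],  # metadata fields to include per document
}

response = requests.post(f"{BASE_URL}/v1/pw_list_documents", json=payload)
response.raise_for_status()
for document in response.json():
    print(document)
```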