Comparison with Flink

Pathway is a Python framework with a unified engine for batch and streaming data processing. Why should you choose Pathway over other existing streaming engines, such as Apache Flink?

To assist you in your choice, a blueprint of Pathway features is provided below, together with a comparison to Apache Flink.

| Feature | Pathway | Apache Flink |
| --- | --- | --- |
| Processing type | Stream and batch (with the same engine). Guarantees that the same results are returned whether running in batch or streaming mode. Capacity for asynchronous stream processing and API integration. | Stream and batch (with different engines). |
| Programming language APIs | Python, SQL | JVM (Java, Kotlin, Scala), SQL, Python |
| Programming API | Table API | DataStream API and Table API, with partial compatibility |
| Software integration ecosystems/plugin formats | Python, C binary interface (C, C++, Rust) | |
Ease of development

| Feature | Pathway | Apache Flink |
| --- | --- | --- |
| How to quickstart | Get Python. Run `pip install pathway`. Run your program directly. | Get Java. Download and unpack the Flink packages. Start a local Flink cluster with `./bin/`. Use netcat to start a local server. Submit your program to the server for running. |
| Running local experiments with data | Use Pathway locally in VS Code, Jupyter, etc. | Based on local Flink clusters. |
| CI/CD and testing | Usual CI/CD setup for Python (GitHub Actions, Jenkins, etc.). Simulated stream library for easy stream testing from file sources. | Based on local Flink cluster integration into CI/CD pipelines. |
| Interactive work possible? | Yes: data manipulation routines can be created interactively in notebooks and the Python REPL. | Compilation is necessary, breaking the data scientist's flow of work. |
| Scalability | Horizontal* and vertical scaling. Scales to thousands of cores and terabytes of application state. Standard and custom libraries (including the ML library) are scalable. | Horizontal and vertical scaling. Scales to thousands of cores and terabytes of application state. Most standard libraries (including the ML library) do not parallelize in streaming mode. |
| Performance for basic tasks (groupby, filter, single join) | Delivers high throughput and low latency. | Slower than Pathway in benchmarks. |
| Transformation chain length in batch computing | 1000+ transformations possible; iteration loops possible. | Max. 40 transformations recommended (in both batch and streaming mode). |
| Fast advanced data transformation (iterative graph algorithms, machine learning) | In batch and streaming mode. | No; a restricted subset is possible in batch mode only. |
| Parameter tuning required | Instance sizing only. Possibility to set window cut-off times for late data. | Considerable tuning required for streaming jobs. |
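To illustrate the simulated-stream testing idea mentioned above, here is a minimal, framework-free Python sketch (the `simulate_stream` helper and the CSV content are hypothetical, not part of Pathway's API): replaying a file source row by row makes streaming logic deterministic and easy to test in a plain CI pipeline.

```python
import csv
import io

def simulate_stream(csv_text):
    """Replay a CSV source row by row, as a deterministic stand-in
    for a live stream when testing pipeline logic."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield row

data = "user,amount\nalice,3\nbob,5\nalice,2\n"

# A toy streaming aggregation, fed from the simulated source:
totals = {}
for row in simulate_stream(data):
    totals[row["user"]] = totals.get(row["user"], 0) + int(row["amount"])
```

The same aggregation code can later be pointed at a real streaming source; only the input iterator changes.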
Architecture and deployment

| Feature | Pathway | Apache Flink |
| --- | --- | --- |
| Distributed deployment (on Kubernetes or bare-metal clusters) | Pool of identical workers (pods).* Sharded by data. | Includes a JobManager and a pool of TaskManagers. Work divided by operation and/or sharded by data. |
| Dataflow handling and communication | Entire dataflow handled by each worker on a data shard, with asynchronous communication when data needs routing between workers. Backpressure built in. | Multiple communication mechanisms depending on configuration. Backpressure handling mechanisms needed across multiple workers. |
| Internal incremental processing paradigm | Commutative (based on record count deltas). | |
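The commutative, delta-based paradigm above can be sketched in plain Python (the `apply_deltas` helper is illustrative only, not Pathway's engine): every input change is a (key, ±1) record-count delta, and because addition is commutative, deltas can be applied in any order and still produce the same aggregate.

```python
from collections import defaultdict

def apply_deltas(counts, deltas):
    """Apply (key, +1/-1) record-count deltas to a running groupby count.
    Addition is commutative, so delta order does not affect the result."""
    for key, delta in deltas:
        counts[key] += delta
        if counts[key] == 0:
            del counts[key]  # a fully retracted key disappears
    return counts

counts = defaultdict(int)
apply_deltas(counts, [("a", +1), ("b", +1), ("a", +1)])  # insertions
apply_deltas(counts, [("a", -1)])  # retraction of one "a" record
```

Retractions (negative deltas) are what let downstream results be revised incrementally instead of recomputed from scratch.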
| Feature | Pathway | Apache Flink |
| --- | --- | --- |
| Primary data structure for state | Multi-temporal log-structured merge-tree (shared arrangements). In-memory state. | Log-structured merge-tree. In-memory state. |
| State management | Integrated with computation. Cold-storage persistence layer optional. Low checkpointing overhead.* | Integrated with computation. Cold-storage persistence layer optional. |
| Semantics of stream connectors | Insert / Upsert | Insert / Upsert |
| Message delivery guarantees | Ensures exactly-once delivery guarantees for state and outputs (if enabled). | Ensures exactly-once delivery guarantees for state and outputs (if enabled). |
| Consistency | Consistent, with exact progress tracking. Outputs reflect all data contained in a prefix of the source streams. All messages are processed atomically; if downstream systems have a notion of transaction, no intermediate states are sent out of the system. | Eventually consistent, with approximate progress tracking using watermarks. Outputs may reflect partially processed messages, and transient inconsistent outputs may be sent out of the system. |
| Processing out-of-order data | Supported by default. Outputs of built-in operations do not depend on data arrival order (unless they are configured to ignore very late data). Event times used for windowing and temporal operations. | Supported or fragile, depending on the scenario. Event-time processing supported in addition to arrival time and approximate watermarking semantics. |
| Fault tolerance | Rewind-to-snapshot. Partial failover handled transparently in hot replica setups.* | Support for partial failover present or not, depending on the scheduler. |
| Monitoring system | Prometheus-compatible endpoint on each pod. | |
| Logging system | Integrates with Docker and Kubernetes container logs. | |
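The claim above that outputs do not depend on arrival order can be illustrated with a framework-free Python sketch (the tumbling-window helper is hypothetical, not Pathway or Flink API): windows are assigned from event times, so shuffling the arrival order leaves the result unchanged.

```python
import random

def tumbling_window_counts(events, size):
    """Count events per tumbling event-time window.
    The result depends only on event times, not on arrival order."""
    counts = {}
    for event_time, _payload in events:
        window_start = (event_time // size) * size
        counts[window_start] = counts.get(window_start, 0) + 1
    return counts

events = [(1, "a"), (3, "b"), (12, "c"), (14, "d"), (27, "e")]
shuffled = events[:]
random.shuffle(shuffled)  # simulate out-of-order arrival
assert tumbling_window_counts(events, 10) == tumbling_window_counts(shuffled, 10)
```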
Machine Learning support

| Feature | Pathway | Apache Flink |
| --- | --- | --- |
| Language of ML library implementation | Python / Pathway | JVM / Flink |
| Parallelism support by ML libraries | ML libraries scale vertically and horizontally. | Most ML libraries are not built for parallelization. |
| Supported modes of ML inference | CPU inference on worker nodes. Asynchronous inference (GPU/CPU). Alerting of result updates after a model change. | CPU inference on worker nodes. |
| Supported modes of ML learning | Add data to the training set. Update or delete data in the training set. Revise past classification decisions. | Add data to the training set. |
| Representative real-time ML libraries | Classification (including kNN), clustering, graph clustering, graph algorithms, vector indexes, signal processing. Geospatial libraries, spatio-temporal data, GPS and trajectories.* Possibility to integrate external Python real-time ML libraries. | Classification (including kNN), clustering, vector indexes. |
| Support for iterative algorithms (iterate until convergence, gradient descent, etc.) | Yes | No |
| API integration with external ML models and LLMs | Yes | No / fragile |
| Typical analytics and machine learning use cases | Data fusion. Monitoring and alerting (rule-based or ML-powered). IoT and logs data observability (rule-based or ML-powered). Trajectory mining.* Graph learning. Recommender systems. Ontologies and dynamic knowledge graphs. Real-time data indexing (vector indexes). LLM-enabled data pipelines and RAG services. Low-latency feature stores. | Monitoring and alerting (rule-based). IoT and logs data observability (rule-based). |
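As a framework-free sketch of the iterate-until-convergence pattern mentioned above (the helper is illustrative only; it is a naive connected-components algorithm, one of the iterative graph computations referred to in the table): labels are propagated repeatedly until a fixed point is reached.

```python
def connected_components(edges, nodes):
    """Iterate until convergence: propagate the minimum label along
    edges until no label changes (naive connected components)."""
    labels = {n: n for n in nodes}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            low = min(labels[u], labels[v])
            for n in (u, v):
                if labels[n] > low:
                    labels[n] = low
                    changed = True  # keep iterating until a fixed point
    return labels

# Two components: {1, 2, 3} and {4, 5}
labels = connected_components([(1, 2), (2, 3), (4, 5)], [1, 2, 3, 4, 5])
```

The key point is the data-dependent loop: the number of passes is not known in advance, which is what a bounded transformation chain cannot express.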
API and HTTP microservices

| Feature | Pathway | Apache Flink |
| --- | --- | --- |
| REST/HTTP API integration | Non-blocking (asynchronous) API calls supported in addition to synchronous calls. | Blocking (synchronous) calls. |
| Acting as microservice host | Provides an API endpoint mechanism for user queries. Supports registered queries (API session mechanism, alerting). | |
| Use as low-latency feature store | Yes, standalone. From 1 ms latency. | Possible in combination with a key-value store such as Redis. From 5 ms latency. Requires manual versioning/consistency checks. |
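The non-blocking API-call pattern can be sketched with plain `asyncio` (the `enrich` function and its scoring are hypothetical stand-ins for a real HTTP call to an external service or model): requests are issued concurrently rather than blocking the stream one record at a time.

```python
import asyncio

async def enrich(record):
    """Hypothetical non-blocking enrichment; a real pipeline would
    await an external HTTP API here (e.g., a model server)."""
    await asyncio.sleep(0.01)  # stands in for network latency
    return {**record, "score": len(record["text"])}

async def main():
    records = [{"text": "hi"}, {"text": "hello"}, {"text": "hey"}]
    # All calls run concurrently; total wait is ~one round trip,
    # not one round trip per record.
    return await asyncio.gather(*(enrich(r) for r in records))

results = asyncio.run(main())
```

With blocking calls, per-record latency adds up along the stream; with asynchronous calls, it largely overlaps.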

\* Features only available in the enterprise version of Pathway. See also Feature comparison.