Deduplicated Kafka Streams for ClickHouse
Clean Data. No maintenance. Less load for ClickHouse.

What Sets It Apart
Features built for ease of use
Deduplication with one click.
Select the columns to use as primary keys and get fully managed processing without tuning memory or state management.

7-day deduplication window.
Duplicates are detected automatically within a 7-day window, ensuring your data stays clean and your storage is not exhausted.
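The idea behind windowed deduplication can be sketched in a few lines of Python. This is an illustration of the technique only, not GlassFlow's actual implementation; the class and method names are hypothetical:

```python
import time

class WindowedDeduplicator:
    """Drops events whose primary-key columns were already seen
    within a retention window (e.g. 7 days)."""

    def __init__(self, key_columns, window_seconds=7 * 24 * 3600):
        self.key_columns = key_columns
        self.window_seconds = window_seconds
        self.seen = {}  # key tuple -> timestamp of first occurrence

    def accept(self, event, now=None):
        now = now if now is not None else time.time()
        # Evict keys whose retention window has expired.
        self.seen = {k: t for k, t in self.seen.items()
                     if now - t < self.window_seconds}
        key = tuple(event[c] for c in self.key_columns)
        if key in self.seen:
            return False  # duplicate within the window: drop
        self.seen[key] = now
        return True       # first occurrence: forward to ClickHouse

dedup = WindowedDeduplicator(key_columns=["user_id", "event_id"])
print(dedup.accept({"user_id": 1, "event_id": "a"}))  # True
print(dedup.accept({"user_id": 1, "event_id": "a"}))  # False
```

Because the state is keyed only by the selected primary-key columns and old keys are evicted, memory stays bounded by the number of distinct keys seen inside the window.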

Batch ingestion built for ClickHouse.
Choose from ingestion strategies such as automatic, size-based, or time-window-based batching.
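Size-based and time-window-based batching can be illustrated with a minimal Python sketch. This is not GlassFlow's implementation, just the general pattern; the names are hypothetical:

```python
import time

class Batcher:
    """Buffers rows and flushes either when max_rows is reached
    (size-based) or when max_wait_seconds has elapsed since the
    first buffered row (time-window-based)."""

    def __init__(self, flush_fn, max_rows=1000, max_wait_seconds=5.0):
        self.flush_fn = flush_fn
        self.max_rows = max_rows
        self.max_wait_seconds = max_wait_seconds
        self.buffer = []
        self.first_row_at = None

    def add(self, row, now=None):
        now = now if now is not None else time.time()
        if not self.buffer:
            self.first_row_at = now
        self.buffer.append(row)
        if (len(self.buffer) >= self.max_rows
                or now - self.first_row_at >= self.max_wait_seconds):
            self.flush_fn(self.buffer)  # e.g. one ClickHouse INSERT
            self.buffer = []

batches = []
b = Batcher(batches.append, max_rows=3, max_wait_seconds=60)
for i in range(7):
    b.add({"id": i}, now=0)
print(len(batches))   # 2 full batches flushed
print(len(b.buffer))  # 1 row still buffered
```

Batching matters for ClickHouse because it performs best with fewer, larger INSERTs rather than many small ones.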

Joins, simplified.
Define which fields of the streams you would like to join, and GlassFlow handles execution and state management automatically.
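A stream join of this kind works by buffering unmatched events per stream until the partner event arrives. A minimal Python sketch of the pattern (hypothetical names, not GlassFlow's API):

```python
class StreamJoiner:
    """Joins two streams on a key field, keeping unmatched events
    in per-stream state until the matching event arrives."""

    def __init__(self, left_key, right_key):
        self.left_key, self.right_key = left_key, right_key
        self.left_state, self.right_state = {}, {}

    def on_left(self, event):
        key = event[self.left_key]
        match = self.right_state.pop(key, None)
        if match is not None:
            return {**event, **match}  # emit joined record
        self.left_state[key] = event   # buffer until right side arrives
        return None

    def on_right(self, event):
        key = event[self.right_key]
        match = self.left_state.pop(key, None)
        if match is not None:
            return {**match, **event}
        self.right_state[key] = event
        return None

j = StreamJoiner("order_id", "order_id")
print(j.on_left({"order_id": 7, "amount": 10}))   # None (buffered)
print(j.on_right({"order_id": 7, "status": "paid"}))
# {'order_id': 7, 'amount': 10, 'status': 'paid'}
```

In a production engine, the buffered state would additionally be bounded by a time window, as described under Stateful Processing below.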

Stateful Processing.
Built-in lightweight state store enables low-latency, in-memory deduplication and joins with context retention within the selected time window.

Managed Kafka and ClickHouse Connector.
Built and maintained by the GlassFlow team. Supports data inserts with a declared schema or schemaless.

Auto Scaling of Workers.
Our Kafka connector spins up new workers based on partition count and ensures execution runs efficiently.

Why You Will Love It
Simple Pipeline
With GlassFlow, you eliminate the workarounds and hacks that would otherwise mean countless hours of setup, unpredictable maintenance, and debugging nightmares. With managed connectors and a serverless engine, it offers a clean, low-maintenance architecture that is easy to deploy and scales effortlessly.


Accurate Data Without Effort
You will go from zero to a full setup in no time! Connectors automatically retry failed data blocks, provide stateful storage, and handle late-arriving events out of the box, ensuring the data ingested into ClickHouse is clean and immediately correct.
Less load for ClickHouse
Because duplicates are removed and joins are executed before ingestion into ClickHouse, expensive operations like FINAL or JOINs inside ClickHouse are no longer needed. This lowers storage costs, improves query performance, and ensures ClickHouse handles only clean, optimized data instead of redundant or unprocessed streams.

Learn More About ClickHouse Optimization
Building a Next-Gen Observability Stack with ClickStack and GlassFlow
Open source observability with ClickStack and GlassFlow.
ClickHouse Deduplication with ReplacingMergeTree: How It Works and Limitations
ReplacingMergeTree deduplication in ClickHouse – and its limitations.
Load Test GlassFlow for ClickHouse: Real-Time Deduplication at Scale
Benchmarking GlassFlow: Fast, reliable deduplication at 20M events
Frequently asked questions
Feel free to contact us if you have any questions after reviewing our FAQs.
ClickHouse's merge process runs in the background and is controlled by ClickHouse itself, which makes deduplication of streaming data nearly impossible without overspending or slow performance. ClickHouse also recommends reducing the use of JOINs, as they can slow the system down considerably. Kafka, meanwhile, lacks native deduplication and JOIN capabilities; it simply stores events. You need a processing layer in between that handles both deduplication and stateful JOINs before the data hits ClickHouse. You can learn more about these challenges in our blog post.
Yes! GlassFlow for ClickHouse is open-source under the Apache 2.0 license. You’re free to use, modify, and self-host it.
Currently, Kafka is the primary input. Support for additional streaming sources like Kinesis and Pub/Sub is on our roadmap. Reach out via the contact form if you have specific needs.
Since GlassFlow for ClickHouse runs entirely locally on your machine, we have no access to your data.
GlassFlow is built for event-based data, especially JSON or structured logs. It is ideal for analytics, telemetry, and operational metrics.
We are currently working on it. If you want to have early access or a chat, feel free to reach out to us.
You can talk to the team via our contact page or our Slack channel.
GitHub Repo