Open Source

Deduplicated

Kafka Streams for ClickHouse

Clean Data. No maintenance. Less load for ClickHouse.

Streaming Deduplication Easier Than Ever

Limitations of ClickHouse

Uncontrollable background merges leave data in an inconsistent state until they finish. FINAL restores accuracy, but at a significant cost to performance and resource usage.

Operational Overhead

Kafka lacks native deduplication, so teams build custom connectors with their own state management. This adds operational complexity and scaling challenges, often resulting in fragile, hard-to-maintain systems.

Not Built for Scale

For high-throughput, mission-critical applications requiring exactly-once guarantees and scalable state management, the native capabilities of ksqlDB are insufficient.

Complex State

Implementing exactly-once semantics with a custom Go service can lead to difficulties in state management, handling late events, and achieving scalability.

Effortless Streaming Joins

Not for Real-Time Joins

ClickHouse is not optimized for continuous, low-latency joins on streaming data. It does not support joining late-arriving events, forcing trade-offs between real-time responsiveness and data completeness.

Java and State Handling

Building join logic with the Kafka Streams library requires using Java and managing custom state within the JVM, which makes development and maintenance more complex and time-consuming.

High Setup Cost

Unless your infrastructure already runs Flink, setting it up for joins is often overkill. Its operational complexity, tuning requirements, and lack of a native ClickHouse connector create significant overhead for engineering teams.

Unreliable Under Load

ksqlDB struggles to scale joins across large or diverse datasets, often overloading memory and state stores.

The GlassFlow Difference

Deduplication with one click.

Select the columns to use as primary keys and enjoy fully managed processing, with no need to tune memory or state management.

7-day deduplication window.

Duplicates are detected automatically for up to 7 days after setup, keeping your data clean without exhausting storage.
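The idea behind primary-key deduplication with a retention window can be sketched in a few lines. This is a minimal, hypothetical illustration of the behavior described above, not GlassFlow's actual implementation:

```python
import time

class WindowedDeduplicator:
    """Drop events whose primary-key columns were already seen within a
    retention window (e.g. 7 days). Illustrative sketch only -- not
    GlassFlow's actual implementation."""

    def __init__(self, key_columns, window_seconds=7 * 24 * 3600):
        self.key_columns = key_columns
        self.window = window_seconds
        self.seen = {}  # key tuple -> first-seen timestamp

    def process(self, event, now=None):
        now = time.time() if now is None else now
        # Evict keys older than the window so state does not grow forever.
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.window}
        key = tuple(event[c] for c in self.key_columns)
        if key in self.seen:
            return None  # duplicate within the window: drop
        self.seen[key] = now
        return event  # first occurrence: forward to ClickHouse

dedup = WindowedDeduplicator(key_columns=["order_id"])
first = dedup.process({"order_id": 1, "amount": 10}, now=0)   # forwarded
second = dedup.process({"order_id": 1, "amount": 10}, now=5)  # None (dropped)
```

Evicting expired keys on every event keeps the state bounded, which is why the window also protects storage.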

Batch ingestion built for ClickHouse.

Choose from ingestion strategies such as auto, size-based, or time-window-based.
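Size-based and time-window-based ingestion both come down to buffering rows and flushing one batch insert when a threshold is hit. A minimal sketch of that logic, under assumed thresholds (the class and parameter names are illustrative, not GlassFlow's API):

```python
import time

class BatchBuffer:
    """Buffer rows and flush when either max_rows is reached (size-based)
    or max_wait seconds have elapsed since the first buffered row
    (time-window-based). Hypothetical sketch of the strategies above."""

    def __init__(self, flush_fn, max_rows=1000, max_wait=5.0):
        self.flush_fn = flush_fn  # e.g. one INSERT into ClickHouse
        self.max_rows = max_rows
        self.max_wait = max_wait
        self.rows = []
        self.first_ts = None

    def add(self, row, now=None):
        now = time.time() if now is None else now
        if not self.rows:
            self.first_ts = now  # window opens with the first row
        self.rows.append(row)
        if len(self.rows) >= self.max_rows or now - self.first_ts >= self.max_wait:
            self.flush()

    def flush(self):
        if self.rows:
            self.flush_fn(self.rows)
            self.rows = []

batches = []
buf = BatchBuffer(batches.append, max_rows=3, max_wait=60)
for i in range(7):
    buf.add({"id": i}, now=0)
buf.flush()  # flush the final partial batch
# batches now holds groups of sizes 3, 3, 1
```

A production system would also flush on a background timer, since the time condition here is only checked when a new row arrives.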

Joins, simplified.

Define the fields of the streams you would like to join, and GlassFlow handles execution and state management automatically.
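Conceptually, a stream-stream join buffers unmatched events from each side so a late-arriving event on one stream can still match its counterpart. A minimal sketch of that mechanic (illustrative only, not GlassFlow's actual engine, and without the time-window eviction a real system would add):

```python
class StreamJoiner:
    """Join two streams on declared key fields, buffering unmatched
    events from each side so late arrivals still match.
    Illustrative sketch, not GlassFlow's API."""

    def __init__(self, left_key, right_key):
        self.left_key = left_key
        self.right_key = right_key
        self.left_buf = {}   # key -> left event awaiting a right match
        self.right_buf = {}  # key -> right event awaiting a left match

    def on_left(self, event):
        key = event[self.left_key]
        if key in self.right_buf:
            return {**self.right_buf.pop(key), **event}  # joined record
        self.left_buf[key] = event
        return None  # no match yet; keep for later

    def on_right(self, event):
        key = event[self.right_key]
        if key in self.left_buf:
            return {**self.left_buf.pop(key), **event}
        self.right_buf[key] = event
        return None

j = StreamJoiner(left_key="order_id", right_key="order_id")
j.on_left({"order_id": 7, "amount": 99})
joined = j.on_right({"order_id": 7, "user": "ada"})
# joined == {"order_id": 7, "amount": 99, "user": "ada"}
```

The buffers are exactly the "state" that the next blurb describes: they hold context for the selected time window so both sides of a join can arrive in any order.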

Stateful Processing.

Built-in lightweight state store enables low-latency, in-memory deduplication and joins with context retention within the selected time window.

Managed Kafka and ClickHouse Connector.

Built and maintained by the GlassFlow team. Supports data inserts with a declared schema or schemaless.

Auto Scaling of Workers.

Our Kafka connector spins up new workers based on the number of partitions and ensures execution runs efficiently.
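Scaling by partitions works because each Kafka partition can be consumed by at most one worker in a group; a simple round-robin assignment is enough to spread the load. A hypothetical sketch of that planning step (function and parameter names are illustrative):

```python
def assign_partitions(partitions, worker_count):
    """Round-robin assignment of Kafka partitions to workers so each
    partition is consumed by exactly one worker. Hypothetical sketch
    of partition-based scaling, not GlassFlow's scheduler."""
    assignment = {w: [] for w in range(worker_count)}
    for i, partition in enumerate(partitions):
        assignment[i % worker_count].append(partition)
    return assignment

plan = assign_partitions([0, 1, 2, 3, 4], worker_count=2)
# plan == {0: [0, 2, 4], 1: [1, 3]}
```

Adding workers beyond the partition count gains nothing, which is why the partition count is the natural scaling trigger.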

Why You Will Love It

Simple Pipeline

With GlassFlow, you remove workarounds or hacks that would have meant countless hours of setup, unpredictable maintenance, and debugging nightmares. With managed connectors and a serverless engine, it offers a clean, low-maintenance architecture that is easy to deploy and scales effortlessly.

Accurate Data Without Effort

You will go from zero to a full setup in no time! Connectors automatically retry failed data blocks, stateful storage is built in, and late-arriving events are handled for you. This ensures the data ingested into ClickHouse is clean and immediately correct.

Less load for ClickHouse

Because GlassFlow removes duplicates and executes joins before ingestion, ClickHouse no longer needs expensive operations like FINAL or in-database JOINs. This lowers storage costs, improves query performance, and ensures ClickHouse only handles clean, optimized data instead of redundant or unprocessed streams.

Learn More About ClickHouse Optimization

ClickHouse

Part 2: Why are duplicates happening and JOINs slowing ClickHouse?

Learn the root causes of the duplication and JOIN issues when streaming from Kafka to ClickHouse.

Written by Armend Avdijaj
ClickHouse

Part 4: Can Apache Flink be the solution?

Why Apache Flink isn't the solution for duplicates and JOINs on ClickHouse.

Written by Armend Avdijaj

Frequently asked questions

Feel free to contact us if you have any questions after reviewing our FAQs.

ClickHouse's merge process happens in the background and is controlled by ClickHouse itself, which makes deduplication of streaming data nearly impossible without overspending and slow performance. ClickHouse also recommends limiting the use of JOINs, as they can slow the system down considerably. Kafka, for its part, lacks native deduplication and JOIN capabilities; it just stores events. You need a processing layer in between that handles both deduplication and stateful JOINs before data hits ClickHouse. You can learn more about the challenges in our blog post.

Yes! GlassFlow for ClickHouse is open-source under the Apache 2.0 license. You’re free to use, modify, and self-host it.

Currently, Kafka is the primary input. Support for additional streaming sources like Kinesis and Pub/Sub is on our roadmap. Reach out via our contact form if you have specific needs.

As GlassFlow for ClickHouse runs entirely locally on your machine, we do not have any access to your data.

GlassFlow is built for event-based data, especially JSON or structured logs. It is ideal for analytics, telemetry, and operational metrics.

We are currently working on it. If you want to have early access or a chat, feel free to reach out to us.

You can talk to the team via our contact page or our Slack channel.


GitHub Repo