Kafka is a Log, Not a Queue
- Shushil Anand
- Jul 17
- 2 min read
Updated: Jul 22

If you've worked with data streaming even briefly, you've probably heard of Kafka. It's a juggernaut in the event streaming world — and with good reason. We use it heavily in our company to power our event-driven architecture, and it's been a game-changer in how our systems communicate.
Event-driven architecture, simply put, is about building systems that react to events instead of constantly polling or relying on tightly coupled integrations. And Kafka? It’s the backbone — the central nervous system — where all these events get published, subscribed to, and processed.
But wait... isn’t Kafka just a fancy queue?
That’s one of the most common misconceptions I’ve come across — even among experienced devs.
In the Kafka 101 video at timestamp 1:38, it’s brilliantly explained:
"Kafka is a log, not a queue."
Unlike a traditional message queue (where messages are removed once consumed), Kafka retains messages in a log — in order — for a configurable amount of time. This small shift in mindset opens up massive design possibilities (see the sketch after this list):
- Multiple consumers can process the same event independently
- Late consumers can replay messages
- You can implement event sourcing, audit trails, and stream reprocessing without exotic hacks
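To make the first two points concrete, here's a minimal sketch using the confluent-kafka Python client (the broker address, topic name, and group names are placeholders, not anything from our setup). Because the two consumers belong to different consumer groups, each keeps its own offset into the same log and sees every message independently:

```python
from confluent_kafka import Consumer

# Build a consumer in its own consumer group. Each group tracks its own
# offsets, so different groups read the same log independently.
def make_consumer(group_id):
    return Consumer({
        "bootstrap.servers": "localhost:9092",  # placeholder broker
        "group.id": group_id,
        # A new group with no committed offset starts from the oldest
        # retained message -- i.e., a late consumer replays the log.
        "auto.offset.reset": "earliest",
    })

analytics = make_consumer("analytics-service")
billing = make_consumer("billing-service")

for consumer in (analytics, billing):
    consumer.subscribe(["orders"])  # placeholder topic

# Polling one group never advances the other: both process the
# same events at their own pace.
msg = analytics.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print("analytics saw:", msg.value())
```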
A quick real-world story
Just last week, a teammate and I were debugging an issue:
Teammate: "Hey, I think we lost some messages... the consumer processed them and they’re gone. We’ll need to re-publish them."
Me: "Wait — you’re thinking of Kafka like a queue. But it’s a log. Unless your topic retention is super short or compaction kicked in, the messages are still there."
Teammate: "Wait, really? So I can just reconsume them?"
Me: "Exactly. Reset your consumer group offset or spin up another consumer — Kafka’s still holding onto them."
And just like that, problem solved.
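For the curious, the fix from that conversation can be expressed in a few lines. This is a sketch, not our actual code: the topic and group names are hypothetical, and it uses the confluent-kafka Python client's on_assign callback to rewind an existing consumer group to the start of the retained log:

```python
from confluent_kafka import Consumer, OFFSET_BEGINNING

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "payments-service",         # the group that "lost" messages
    "auto.offset.reset": "earliest",
})

def rewind(consumer, partitions):
    # Override the group's committed offsets: start every assigned
    # partition from the earliest retained message, replaying the log.
    for p in partitions:
        p.offset = OFFSET_BEGINNING
    consumer.assign(partitions)

consumer.subscribe(["payments"], on_assign=rewind)  # placeholder topic

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None:
        continue
    if msg.error():
        raise RuntimeError(msg.error())
    print("replayed:", msg.value())
```

The other option from the conversation, spinning up a brand-new consumer group, needs no rewinding at all: a fresh group with auto.offset.reset set to earliest simply starts from the beginning of whatever the topic has retained.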
Kafka is just awesome
Once you stop thinking of Kafka as just a queue and start embracing it as a distributed, immutable, append-only log, your architecture possibilities expand dramatically.
And while setting up vanilla Kafka is doable, managing it in production can get tricky. That’s where Confluent Kafka shines — it makes things like schema registry, connectors, ACLs, and monitoring incredibly smooth to set up and manage. We use Confluent to connect Kafka with MongoDB, S3, and other systems with minimal effort.
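As a rough illustration of how little glue that takes, registering a MongoDB sink through the Kafka Connect REST API looks something like this. Every name, host, and URI below is a placeholder, and the available options depend on the connector version you run:

```python
import requests

# Register a MongoDB sink connector with a Kafka Connect worker.
# All names, hosts, and URIs here are placeholders for illustration.
connector = {
    "name": "orders-mongo-sink",
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
        "topics": "orders",
        "connection.uri": "mongodb://mongo:27017",
        "database": "shop",
        "collection": "orders",
    },
}

resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()  # Connect returns 201 when the connector is created
print(resp.json())
```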
TL;DR: Kafka is powerful — not because it replaces queues, but because it transcends them.
So next time you think of Kafka, think log, not queue.