Publish-Subscribe Event Broadcast
One event, many independent consumers. Publishers don't know who's listening; subscribers don't know who else is. Decouples producers from consumers so each can evolve independently.
When to Use This Pattern
Use pub/sub when an event has multiple interested consumers that can process it independently:
- user.signed_up — email, analytics, CRM, provisioning
- order.placed — inventory, billing, shipping, notification
- document.uploaded — virus scan, OCR, archive, notify owner
- deployment.completed — status dashboard, Slack, on-call
If you're adding `if (thing) doA(); if (thing) doB(); if (thing) doC();` in the same function, you want pub/sub.
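That refactor can be sketched as a minimal in-process pub/sub (illustrative names and handlers, not a specific library):

```python
# Replace an if-chain with a topic that fans out to independent handlers.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    # The publisher only knows the topic, never the handlers.
    for handler in subscribers[topic]:
        handler(event)

received = []
subscribe("user.signed_up", lambda e: received.append(("email", e["user_id"])))
subscribe("user.signed_up", lambda e: received.append(("analytics", e["user_id"])))

publish("user.signed_up", {"user_id": 42})
```

Adding a fourth consumer is now a new `subscribe` call; the publish site never changes.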
How It Works
A publisher emits events to a topic without knowing who's listening. Subscribers register interest in the topic and receive every event independently. Common backends: Kafka, RabbitMQ, AWS SNS/SQS, Google Pub/Sub, Redis Streams.
Each subscriber processes events in its own isolated lane. One subscriber's failure doesn't block the others. One subscriber's slow processing doesn't slow the publisher.
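The isolation point can be shown with a small sketch (assumed in-process dispatch; real brokers get this isolation from separate queues): one failing handler must not stop delivery to the others.

```python
# Dispatch an event to every handler, isolating failures per handler.
def dispatch(event, handlers):
    failures = []
    for handler in handlers:
        try:
            handler(event)
        except Exception as exc:  # isolate: record and keep delivering
            failures.append((handler.__name__, exc))
    return failures

seen = []
def inventory(e): seen.append("inventory")
def billing(e): raise RuntimeError("billing service down")
def shipping(e): seen.append("shipping")

failed = dispatch({"order_id": 1}, [inventory, billing, shipping])
```

`inventory` and `shipping` both run even though `billing` blew up mid-fan-out.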
Events should describe what happened, not what to do. user.signed_up is good. send_welcome_email is not — it's a command dressed as an event, and it violates the decoupling.
Implementation Guide
Step 1: Design the event shape first
Version the schema (user.signed_up.v1) and keep events self-describing. A subscriber should be able to handle an event without calling back to the source system.
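A hypothetical envelope for `user.signed_up.v1` might look like this: a versioned type, a unique ID subscribers can dedup on, and enough payload that nobody has to call back to the source system.

```python
# Build a self-describing, versioned event envelope (illustrative shape).
import uuid
from datetime import datetime, timezone

def make_event(event_type, version, payload):
    return {
        "id": str(uuid.uuid4()),           # subscribers dedup on this
        "type": f"{event_type}.v{version}",
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,                # self-describing: no callback needed
    }

event = make_event("user.signed_up", 1,
                   {"user_id": 42, "email": "a@example.com", "plan": "free"})
```

The field names here are assumptions; the point is that type, version, ID, and payload travel together.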
Step 2: Pick at-least-once semantics
Every subscriber will see every event at least once. Duplicates are normal. Make subscribers idempotent using the event ID.
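Idempotency via the event ID can be as simple as a dedup check before the side effect (an in-memory set stands in here for a durable store such as a database table or Redis):

```python
# An idempotent subscriber: at-least-once redelivery becomes harmless.
processed_ids = set()
welcome_emails_sent = []

def handle_signed_up(event):
    if event["id"] in processed_ids:
        return  # duplicate delivery: already handled, do nothing
    processed_ids.add(event["id"])
    welcome_emails_sent.append(event["payload"]["email"])  # the side effect

event = {"id": "evt-1", "payload": {"email": "a@example.com"}}
handle_signed_up(event)
handle_signed_up(event)  # broker redelivers the same event
```

Two deliveries, one welcome email.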
Step 3: Give each subscriber its own queue
Don't have subscribers share a queue. Fan out from the topic into per-subscriber queues so slow consumers don't block fast ones and failures are isolated.
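The fan-out shape, sketched in-process (brokers like SNS→SQS or RabbitMQ exchanges do the same thing durably): the topic copies each event into one queue per subscriber, so a slow consumer only backs up its own queue.

```python
# Topic fan-out into per-subscriber queues (illustrative, in-memory).
from collections import deque

class Topic:
    def __init__(self):
        self.queues = {}

    def add_subscriber(self, name):
        self.queues[name] = deque()

    def publish(self, event):
        for q in self.queues.values():
            q.append(event)  # each subscriber gets its own copy

topic = Topic()
topic.add_subscriber("billing")
topic.add_subscriber("shipping")
topic.publish({"order_id": 1})

# billing consumes at its own pace; shipping's backlog just grows
billing_event = topic.queues["billing"].popleft()
```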
Step 4: Handle schema evolution carefully
Adding a new optional field: safe. Removing a field: breaking. Renaming: breaking. Publish v1 and v2 side-by-side for a migration window; retire v1 when no subscribers are consuming it.
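A dual-publish migration window might look like this (hypothetical rename of `name` to `full_name`): emit v1 and v2 side by side until no subscriber still reads v1.

```python
# Publish both schema versions during the migration window.
published = []

def publish(topic, event):
    published.append((topic, event))

def emit_signed_up(user):
    # v1 keeps the old field name alive for laggard subscribers
    publish("user.signed_up.v1", {"user_id": user["id"], "name": user["full_name"]})
    # v2 is what new subscribers should consume
    publish("user.signed_up.v2", {"user_id": user["id"], "full_name": user["full_name"]})

emit_signed_up({"id": 42, "full_name": "Ada Lovelace"})
```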
Step 5: Monitor subscriber lag
Every subscriber should emit "lag" — how far behind the latest event it is. A subscriber that's been 10 minutes behind for an hour is silently broken.
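One way to sketch that check (offsets and thresholds here are assumptions, not a specific broker's API): lag is the gap between the topic's newest offset and the subscriber's committed offset, and "silently broken" means every recent sample is over the threshold.

```python
# Compute lag and detect a subscriber that is stuck behind.
def subscriber_lag(latest_offset, committed_offset):
    return latest_offset - committed_offset

def is_stuck(lag_samples, threshold):
    # stuck = over threshold for the whole sampling window
    return all(lag > threshold for lag in lag_samples)

samples = [600, 612, 605, 630]  # seconds behind, sampled across an hour
stuck = is_stuck(samples, threshold=300)
```

A subscriber that briefly spikes and recovers should not page anyone; one that never recovers should.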
Tips & Best Practices
- Events are facts, not requests. Past tense: invoice.approved, not approve_invoice.
- Include enough context to be self-sufficient. If every subscriber has to call back for details, the coupling is still there.
- Don't chain events. Subscribers that publish new events in response are fine. Subscribers that wait for their subscribers to finish are doing RPC badly.
- Keep subscribers small. One thing per subscriber beats a god subscriber that handles 17 use cases.
- Test replay. Can you re-emit the last hour of events into a dev topic? If not, you can't recover from bugs.
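The replay tip can be sketched as a small script (assumes events were archived as JSON lines with an `occurred_at` timestamp; the dev-topic publisher is a stub):

```python
# Replay the last hour of archived events into a dev topic.
import json
from io import StringIO

# Stand-in for an archived event log (one JSON object per line).
archive = StringIO(
    '{"id": "e1", "occurred_at": "2024-01-01T10:05:00Z", "type": "order.placed"}\n'
    '{"id": "e2", "occurred_at": "2024-01-01T11:30:00Z", "type": "order.placed"}\n'
)

replayed = []
def publish_to_dev(event):
    replayed.append(event["id"])

for line in archive:
    event = json.loads(line)
    # ISO-8601 strings compare correctly as plain strings
    if event["occurred_at"] >= "2024-01-01T11:00:00Z":
        publish_to_dev(event)
```

If your subscribers are idempotent (Step 2), replaying into production after a bug fix is also safe.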
Related patterns
API Polling with Change Detection
Periodically check an external system for changes and trigger workflows when new or updated records are detected. The reliable alternative when webhooks aren't available.
Change Data Capture Stream
Stream row-level changes out of a database in near real-time using the transaction log. No polling, no app changes — downstream systems get inserts, updates, and deletes as they happen.
Reverse ETL
Push modelled data from your warehouse back into the SaaS tools that business teams use every day — CRM, marketing, support — so they can act on analytics without a BI detour.