Updated Feb 10, 2026

Chapter 52: Event-Driven Architecture with Kafka

You build the kafka-events skill first, then use each lesson to test and deepen it, from EDA fundamentals to production-grade operations. The chapter targets Kafka 4.0+, where KRaft mode (no ZooKeeper) is the default.


Goals

  • Understand event-driven architecture and Kafka’s core model (topics, partitions, consumer groups); see the sketch after this list
  • Deploy Kafka with Strimzi in KRaft mode
  • Build reliable producers/consumers with delivery guarantees and transactions
  • Integrate Kafka with FastAPI using schemas (Avro + schema registry)
  • Apply advanced patterns: Connect, CDC with Debezium, agent events, saga
  • Operate Kafka: production config, monitoring, debugging
  • Capture everything in a reusable Kafka skill
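
As a first taste of the core model named in the opening goal, here is a minimal sketch (broker address and topic name are placeholders): a topic is created with a fixed partition count; events with the same key always land in the same partition, and each consumer group keeps its own offsets per partition.

```python
# Hypothetical first look at the core model: a topic with three partitions.
# Events sharing a key always land in the same partition, preserving their
# order; each consumer group tracks its own offset per partition.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder address
futures = admin.create_topics(
    [NewTopic("orders", num_partitions=3, replication_factor=1)]
)
for topic, future in futures.items():
    future.result()  # raises if topic creation failed
    print(f"Created {topic}")
```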

Lesson Progression

#     | Lesson                                | Focus
0     | Build Your Kafka Skill                | Scaffold from docs
1-3   | EDA Foundations                       | Why events, EDA patterns, Kafka mental model
4-8   | Kafka Core                            | Strimzi deploy, producers, consumers, groups
9-13  | Production Patterns                   | Async FastAPI integration, schemas, delivery semantics, transactions, reliability config
14-17 | Advanced Patterns                     | Connect, Debezium CDC, agent event design, saga orchestration
18-19 | Operations                            | Production Strimzi, monitoring/debugging
20    | AI-Assisted Development               | Use AI to generate/tune configs
21    | Capstone: Event-Driven Notifications  | Production pipeline

Each lesson ends with a skill reflection: test, find gaps, and improve.


Outcome & Method

You finish with a production-ready Kafka deployment, reliable producer/consumer code integrated with FastAPI, and a Kafka skill for future projects. The chapter follows the 4-Layer approach: foundations → production patterns → AI-assisted authoring → spec-driven capstone.


Prerequisites

  • Chapters 49-51: container image and Kubernetes/Helm familiarity
  • Ability to run a local Kubernetes cluster (e.g., Docker Desktop) for Strimzi


What You'll Learn

  • Implement reliable producers: acks semantics, retries, idempotent producer, error handling (producer sketch after this list)
  • Implement robust consumers: consumer groups, rebalancing, offset management, lag monitoring (consumer sketch below)
  • Integrate with FastAPI: async producers/consumers, lifespan events, background tasks (lifespan sketch below)
  • Design event schemas: Avro with Schema Registry, schema evolution, breaking-change prevention (Avro sketch below)
  • Apply delivery guarantees: at-least-once, at-most-once, exactly-once semantics and trade-offs
  • Use transactions: consume-process-produce pattern, zombie fencing, read_committed isolation (transactional loop below)
  • Build data pipelines: Kafka Connect, Debezium CDC, outbox pattern for microservices (outbox sketch below)
  • Implement agent patterns: task events, notification fanout, audit logs, saga pattern
  • Run Kafka on Kubernetes: Strimzi operator, Kafka CRDs, KRaft mode, production configuration
  • Debug production issues: consumer lag, under-replicated partitions, rebalancing storms
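
To ground the producer item above, here is a minimal reliable-producer sketch with confluent-kafka-python. The broker address, topic, key, and payload are placeholders; acks, enable.idempotence, and the delivery callback are the standard knobs the lessons tune.

```python
# Hypothetical reliable producer: acks=all + idempotence + delivery callback.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder; use your Strimzi bootstrap address
    "acks": "all",                # wait for all in-sync replicas to acknowledge
    "enable.idempotence": True,   # broker de-duplicates retried sends
    "retries": 5,                 # retry transient errors before surfacing them
})

def on_delivery(err, msg):
    # Invoked from poll()/flush(); err is set only on permanent failure.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

producer.produce("orders", key="order-123", value=b'{"status": "created"}',
                 on_delivery=on_delivery)
producer.poll(0)   # serve delivery callbacks without blocking
producer.flush()   # block until all outstanding messages are delivered
```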
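
A companion consumer sketch, assuming the same placeholder broker and topic: committing manually after processing gives at-least-once delivery, and group.id determines which consumers share the topic's partitions.

```python
# Hypothetical consumer in a group with manual offset commits (at-least-once).
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder
    "group.id": "notifications",            # members of a group split the partitions
    "auto.offset.reset": "earliest",        # start from the beginning if no committed offset
    "enable.auto.commit": False,            # commit only after successful processing
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise Exception(msg.error())
        # Stand-in for real processing logic.
        print(f"Processing {msg.value()!r} from partition {msg.partition()}")
        consumer.commit(message=msg, asynchronous=False)  # commit after handling
finally:
    consumer.close()  # leaves the group cleanly, triggering a rebalance
```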
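
A sketch of the lifespan wiring, assuming a placeholder broker and a hypothetical /orders endpoint. confluent-kafka's Producer is synchronous under the hood, so the handler only enqueues and polls briefly rather than blocking the event loop.

```python
# Hypothetical FastAPI integration: producer created at startup, flushed at shutdown.
from contextlib import asynccontextmanager

from confluent_kafka import Producer
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    app.state.producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder
    yield                        # application serves requests here
    app.state.producer.flush()   # drain in-flight messages on shutdown

app = FastAPI(lifespan=lifespan)

@app.post("/orders")
async def create_order(order: dict):
    p: Producer = app.state.producer
    p.produce("orders", value=str(order).encode())  # enqueue, fire-and-forget
    p.poll(0)                                       # serve delivery callbacks, non-blocking
    return {"status": "queued"}
```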
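
A sketch of the Avro path, assuming a local Schema Registry at a placeholder URL and a hypothetical OrderCreated schema. AvroSerializer registers the schema and prefixes each message with its registry ID, which is what makes controlled evolution possible.

```python
# Hypothetical Avro producer path using Confluent Schema Registry.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

schema_str = """
{
  "type": "record",
  "name": "OrderCreated",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})  # placeholder URL
serializer = AvroSerializer(registry, schema_str)

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder
event = {"order_id": "order-123", "amount": 42.5}
producer.produce(
    "orders",
    value=serializer(event, SerializationContext("orders", MessageField.VALUE)),
)
producer.flush()
```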
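
A sketch of the consume-process-produce loop under transactions, with placeholder topics and a trivial transform standing in for real processing. The stable transactional.id is what fences zombie producers after a restart; read_committed hides aborted writes from the consumer.

```python
# Hypothetical exactly-once pipeline: consume from one topic, produce to another,
# committing the consumer offsets inside the same producer transaction.
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder
    "group.id": "pipeline",
    "isolation.level": "read_committed",    # skip messages from aborted transactions
    "enable.auto.commit": False,
})
consumer.subscribe(["orders"])

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "transactional.id": "pipeline-1",       # stable ID enables zombie fencing
})
producer.init_transactions()

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    producer.begin_transaction()
    try:
        # Trivial transform stands in for real processing.
        producer.produce("orders-enriched", value=msg.value().upper())
        # Commit the input offsets as part of the same transaction.
        producer.send_offsets_to_transaction(
            consumer.position(consumer.assignment()),
            consumer.consumer_group_metadata(),
        )
        producer.commit_transaction()
    except Exception:
        producer.abort_transaction()  # output and offsets roll back together
        # Real code would also seek the consumer back before retrying.
```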
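
Finally, a sketch of the outbox pattern, using an in-memory SQLite database as a stand-in for the service's store; table names and the relay are illustrative. The point is atomicity: the business row and its event commit together or not at all.

```python
# Hypothetical outbox write: the business row and its event land in one
# database transaction; a relay (e.g., Debezium) later publishes outbox
# rows to Kafka, so the event is emitted iff the business write committed.
import json
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the service database
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE outbox (aggregate_id TEXT, event_type TEXT, payload TEXT)")

with db:  # one atomic transaction covers both writes
    db.execute("INSERT INTO orders VALUES (?, ?)", ("order-123", "created"))
    db.execute(
        "INSERT INTO outbox VALUES (?, ?, ?)",
        ("order-123", "OrderCreated", json.dumps({"order_id": "order-123"})),
    )
```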

Technology Choices

Component      | Choice                            | Rationale
Kafka Operator | Strimzi                           | CNCF project, industry standard for Kafka on K8s
Kafka Mode     | KRaft (no ZooKeeper)              | Kafka 4.0+ default, simpler architecture
Python Client  | confluent-kafka-python            | Best performance (librdkafka), Schema Registry support
Schemas        | Avro + Confluent Schema Registry  | Industry standard, evolution support
Platform       | Docker Desktop Kubernetes         | Consistent with Chapters 49-51
CDC            | Debezium                          | Best-in-class change data capture

What's NOT Covered

This chapter focuses on developer skills, not SRE operations:

  • Docker Compose — we use Kubernetes throughout Part 7
  • Multi-datacenter replication (MirrorMaker 2)
  • Security deep dive (SASL, SSL, ACLs) — covered at overview level only
  • Kafka Streams framework — separate advanced topic
  • Broker hardware sizing and tuning
  • ZooKeeper — removed in Kafka 4.0

Looking Ahead

This chapter teaches Kafka directly. Chapter 53 (Dapr) shows how to abstract pub/sub behind Dapr's API, making your code portable across message brokers while retaining the concepts you learned here.