Building Data Streaming Applications with Apache Kafka, by Manish Kumar and Chanchal Singh
Common messaging publishing patterns
Applications may have different requirements of a producer: one producer may not care about acknowledgements for the messages it sends, while another cares about acknowledgements but not about the order of the messages. Different producer patterns address these requirements. Let's discuss them one by one:
- Fire-and-forget: In this pattern, producers only care about sending messages to Kafka queues. They do not wait for any success or failure response from Kafka. Kafka is a highly available system, and most of the time messages are delivered successfully; however, there is some risk of message loss with this pattern. This kind of pattern is useful when latency has to be minimized as much as possible and one or two lost messages do not affect the overall functionality of the system. To use the fire-and-forget model with Kafka, set the producer acks configuration to 0 (see the configuration sketch after the figure below). The following image represents the Kafka-based fire-and-forget model:
Kafka producer fire and forget model
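As a rough illustration (not taken from the book), the following Java sketch shows a producer configured for fire-and-forget: acks is set to 0 and the Future returned by send() is deliberately ignored. The broker address and topic name are placeholder assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FireAndForgetProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address is an illustrative placeholder.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // acks=0: the producer does not wait for any acknowledgement from the broker.
        props.put(ProducerConfig.ACKS_CONFIG, "0");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name "example-topic" is a hypothetical example.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("example-topic", "key", "value");
            // Send and move on; the returned Future is ignored, so failures go unnoticed.
            producer.send(record);
        }
    }
}
```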
- One message transfers: In this pattern, the producer sends one message at a time. It can do so in synchronous or asynchronous mode. In synchronous mode, the producer sends the message and waits for a success or failure response before retrying the message or throwing an exception. In asynchronous mode, the producer sends the message and receives the success or failure response through a callback function (both modes are sketched after the figure below). The following image indicates this model. This kind of pattern is used for highly reliable systems where guaranteed delivery is a requirement. In this model, the producer thread waits for the response from Kafka. However, this does not mean that you cannot send multiple messages at a time; you can achieve that with a multithreaded producer application.

Kafka producer one message transfer model
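The following Java sketch, a minimal illustration rather than the book's own code, shows both single-message modes with the standard producer API: a synchronous send that blocks on the returned Future, and an asynchronous send that supplies a callback. The broker address and topic name are placeholder assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OneMessageProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all: wait for the broker's full acknowledgement before a send is complete.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("example-topic", "key", "value");

            // Synchronous mode: block on the Future until Kafka responds or throws.
            RecordMetadata metadata = producer.send(record).get();
            System.out.println("Synchronous send acked at offset " + metadata.offset());

            // Asynchronous mode: the callback receives either metadata or an exception.
            producer.send(record, (meta, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();   // failure path: log, retry, or alert
                } else {
                    System.out.println("Asynchronous send acked at offset " + meta.offset());
                }
            });
        }
    }
}
```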
- Batching: In this pattern, producers send multiple records destined for the same partition as a single batch. The amount of memory a batch may use and the time to wait before sending a batch to Kafka are controlled by producer configuration parameters (a configuration sketch follows the figure below). Batching improves performance by sending larger network packets and writing larger datasets to disk sequentially, which avoids the efficiency issues of random disk reads and writes: all the data in one batch is written to the hard drive in one sequential pass. The following image indicates the batching message model:

Kafka producer batching message model
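As a hedged sketch of how batching is commonly tuned (values assumed for illustration, not taken from the book), the following producer uses the standard batch.size, linger.ms, and buffer.memory parameters; records sharing a key land on the same partition and can therefore be grouped into batches. The broker address, topic name, and numbers are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // batch.size: maximum bytes buffered per partition before a batch is sent.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
        // linger.ms: how long to wait for more records before sending a partially full batch.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        // buffer.memory: total memory available for buffering records not yet sent.
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64 * 1024 * 1024L);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key go to the same partition, so they batch together.
            for (int i = 0; i < 1000; i++) {
                producer.send(new ProducerRecord<>("example-topic", "same-key", "message-" + i));
            }
            // close() (via try-with-resources) flushes any batches still sitting in the buffer.
        }
    }
}
```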