How it works...

When implementing a stream processor function, we often need more information than is available in the current event object. It is a best practice, when publishing events, to include all the relevant data available in the publishing context, so that each event represents a micro snapshot of the system at the time of publishing. When this data is not enough, the processor needs to retrieve more; however, in cloud-native systems we strive to eliminate synchronous inter-service communication because it reduces the autonomy of the services. Instead, we create a micro event store that is tailored to the needs of the specific service.
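As an illustration, an event envelope might look like the following. The field names and values are assumptions for this sketch; the point is that the publisher embeds a snapshot of the relevant domain data so that downstream processors rarely need to call back for more:

```javascript
// A hypothetical event: a micro snapshot of the system at publish time.
const event = {
  id: 'a1b2c3d4-5678-11e9-b475-0800200c9a66', // time-based V1 UUID
  type: 'order-submitted',
  timestamp: 1554213200000,
  partitionKey: 'order-1',
  order: {
    // the publishing context, embedded so consumers are self-sufficient
    id: 'order-1',
    customer: { id: 'cust-42', name: 'Jane Doe' },
    total: 79.95,
  },
};
```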

First, we implement a listener function that filters the stream for the desired events. Each event is stored in a DynamoDB table; you can store the entire event or just the information that is needed. When storing these events, we collate related events by carefully defining the HASH and RANGE keys. For example, we might want to collate all events for a specific domain object ID or all events from a specific user ID. In this example, we use event.partitionKey as the hash key, but the hash key can be calculated from any of the available data. For the range key, we need a value that is unique within the hash key. The event.id is a good choice if it is implemented as a V1 UUID, because V1 UUIDs are time-based. The Kinesis sequence number is another good choice. The event.timestamp is an alternative, but there is the potential for two events within the same hash key to be created at the exact same time.
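A minimal sketch of this listener step follows. The event shape (type, partitionKey, id), the `order-` filter, and the table name are assumptions; the DynamoDB DocumentClient is passed in as a parameter so the storage step can be exercised without AWS credentials:

```javascript
// Decode a Kinesis record's base64 payload into an event object.
const parseRecord = (record) =>
  JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf8'));

// Filter for the desired events and store each one, keyed so that
// related events collate under the same hash key.
const listener = (kinesisEvent, db, tableName) =>
  Promise.all(kinesisEvent.Records
    .map(parseRecord)
    .filter(e => e.type.startsWith('order-')) // keep only the desired events
    .map(e => db.put({
      TableName: tableName,
      Item: {
        partitionKey: e.partitionKey, // HASH key collates related events
        id: e.id,                     // RANGE key: time-based V1 UUID
        event: e,                     // store the whole event, or a subset
      },
    }).promise()));
```

In a deployed function, `db` would be an `AWS.DynamoDB.DocumentClient` instance and `tableName` would come from an environment variable.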

The trigger function, which is attached to the DynamoDB stream, takes over after the listener has saved an event. The trigger calls getMicroEventStore to retrieve the micro event store based on the hash key calculated for the current event. At this point, the stream processor has all the relevant data available in memory. The events in the micro event store are in historical order, based on the value used for the range key. The stream processor can use this data however it sees fit to implement its business logic.
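The trigger step can be sketched as follows. The key names and the `getMicroEventStore` helper match the description above, but the exact signatures are assumptions; again the client is injected so the logic can be run against a stub:

```javascript
// Query the table for every event that shares the current item's hash key.
// DynamoDB returns the items in range-key order, i.e. historical order.
const getMicroEventStore = (db, tableName, partitionKey) =>
  db.query({
    TableName: tableName,
    KeyConditionExpression: '#pk = :pk',
    ExpressionAttributeNames: { '#pk': 'partitionKey' },
    ExpressionAttributeValues: { ':pk': partitionKey },
  }).promise().then(data => data.Items);

// For each newly inserted item on the DynamoDB stream, retrieve the full
// micro event store and hand it to the business logic.
const trigger = (streamEvent, db, tableName) =>
  Promise.all(streamEvent.Records
    .filter(r => r.eventName === 'INSERT')
    .map(r => getMicroEventStore(db, tableName, r.dynamodb.Keys.partitionKey.S)
      .then(events => {
        // all related events are now in memory, in historical order;
        // apply the processor's business logic here
        return events;
      })));
```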

Use the DynamoDB TTL feature to keep the micro event store from growing unbounded.
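DynamoDB TTL works by expiring items whose designated attribute holds an epoch-seconds timestamp in the past, so the listener can stamp each item on write. The 30-day window here is an arbitrary assumption; tune it to how far back the processor needs to look:

```javascript
// Add a ttl attribute (epoch seconds) so DynamoDB can expire old events.
// TTL must also be enabled on the table with 'ttl' as the designated attribute.
const RETENTION_DAYS = 30; // assumption: adjust per service
const withTtl = (item, nowMs = Date.now()) => ({
  ...item,
  ttl: Math.floor(nowMs / 1000) + RETENTION_DAYS * 24 * 60 * 60,
});
```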