Native streaming versus micro-batch

Let's examine how stateful stream processing (as found in Apex and Flink) compares to the micro-batch approach of Apache Spark Streaming.

Let's look at the following diagram:

In the preceding diagram, the top shows an example of processing in Spark Streaming, and the bottom shows an example in Apex. Based on its underlying "stateless" batch architecture, Spark Streaming processes a stream by dividing it into small batches (micro-batches) that typically span from 500 ms to a few seconds. A new task is scheduled for every micro-batch. Once scheduled, the new task needs to be initialized. Such initialization could include opening connections to external resources, loading data that is needed for processing, and so on. Overall, this implies a per-task overhead that limits the micro-batch frequency and leads to a latency trade-off.
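The per-task overhead can be illustrated with a minimal sketch. The function names and the fixed "initialization" step below are hypothetical, used only to contrast one initialization per micro-batch against one initialization for a long-running operator:

```python
init_count = 0

def init_task():
    """Hypothetical per-task setup: opening connections, loading
    reference data, and so on. Counted here to make the overhead visible."""
    global init_count
    init_count += 1
    return {"conn": "open"}

def process_micro_batch(events):
    init_task()                      # repeated for EVERY micro-batch
    return sum(events)

def process_as_long_running(batches):
    init_task()                      # done once for the whole stream
    return [sum(batch) for batch in batches]

batches = [[1, 2], [3, 4], [5]]
micro_batch_results = [process_micro_batch(b) for b in batches]
init_after_micro = init_count        # one initialization per micro-batch
streaming_results = process_as_long_running(batches)
```

Both paths produce the same results, but the micro-batch path pays the initialization cost once per batch, which is what bounds how small (and therefore how low-latency) the batches can be made in practice.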

In classical batch processing, a task may run for the duration of the entire bounded input data set. Any computational state remains internal to the task, and there is typically no special fault-tolerance consideration required: whenever there is a failure, the task can simply restart from the beginning.

However, with unbounded data and streaming, a stateful operation like counting needs to maintain the current count, and that state must be carried across task boundaries. As long as the state is small, this may be manageable. However, when transformations are applied to data with large key cardinality, the state can easily grow to a size that makes it impractical to swap in and out (due to the cost of serialization, I/O, and so on). Correct state management is not easy to solve without underlying platform support, especially when accuracy, consistency, and fault tolerance are important.
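A minimal sketch of such a stateful operator makes the issue concrete. This is not Apex or Flink API code; it is a simplified illustration of an operator that keeps per-key counts in memory, where the checkpoint snapshot grows with key cardinality:

```python
from collections import defaultdict

class CountingOperator:
    """Sketch of a stateful streaming operator: per-key counts are kept
    inside the long-running operator instead of being shuttled between
    short-lived tasks."""

    def __init__(self):
        # This state grows with the number of distinct keys seen.
        self.counts = defaultdict(int)

    def process(self, key):
        """Handle one incoming event and return the updated count."""
        self.counts[key] += 1
        return self.counts[key]

    def checkpoint(self):
        """Snapshot the state for fault tolerance. The platform would
        persist this; the serialization cost grows with the state size."""
        return dict(self.counts)

op = CountingOperator()
for word in ["a", "b", "a", "a"]:
    op.process(word)
snapshot = op.checkpoint()
```

With platform support, the snapshot and recovery are handled incrementally and transparently; without it, the application would have to serialize and restore this growing dictionary itself, which is exactly what becomes impractical at large key cardinality.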
