
Native streaming versus micro-batch

Let's examine how stateful stream processing (as found in Apex and Flink) compares to the micro-batch approach of Apache Spark Streaming.

Let's look at the following diagram:

In the diagram, the top half shows an example of processing in Spark Streaming and the bottom half shows an example in Apex. Owing to its underlying "stateless" batch architecture, Spark Streaming processes a stream by dividing it into small batches (micro-batches) that typically span 500 ms to a few seconds. A new task is scheduled for every micro-batch, and once scheduled, the new task needs to be initialized. Such initialization can include opening connections to external resources, loading data needed for processing, and so on. Overall, this implies a per-task overhead that limits the micro-batch frequency and leads to a latency trade-off.
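The per-task overhead described above can be sketched in plain Java. This is an illustrative simulation, not the API of any engine: `initTask` stands in for whatever expensive setup (connections, reference data) a real task performs, and the two methods contrast paying that cost once per micro-batch versus once for a long-running streaming task.

```java
import java.util.List;

public class MicroBatchOverhead {
    // Hypothetical expensive per-task setup (e.g. opening a connection,
    // loading reference data). In a real engine this is framework code.
    static int initCount = 0;
    static void initTask() { initCount++; }

    // Micro-batch style: a new task, and hence new setup, per batch.
    static int processMicroBatches(List<List<Integer>> batches) {
        int total = 0;
        for (List<Integer> batch : batches) {
            initTask();                       // paid once per micro-batch
            for (int x : batch) total += x;
        }
        return total;
    }

    // Native streaming style: one long-running task, setup paid once.
    static int processStream(List<Integer> stream) {
        initTask();                           // paid once for the whole stream
        int total = 0;
        for (int x : stream) total += x;
        return total;
    }

    public static void main(String[] args) {
        List<List<Integer>> batches = List.of(List.of(1, 2), List.of(3, 4));
        System.out.println(processMicroBatches(batches)); // same result ...
        System.out.println(processStream(List.of(1, 2, 3, 4))); // ... fewer setups
    }
}
```

Both paths compute the same result; the difference is how many times the setup cost is paid, which is exactly what caps how small the micro-batches can usefully be.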

In classical batch processing, tasks may last for the entire bounded input data set. Any computational state remains internal to the task and there is typically no special consideration for fault tolerance required, since whenever there is a failure, the task can restart from the beginning.

However, with unbounded data and streaming, a stateful operation like counting needs to maintain the current count, and that state needs to be transferred across task boundaries. As long as the state is small, this may be manageable. However, when transformations are applied to data with large key cardinality, the state can easily grow to a size that makes it impractical to swap in and out (cost of serialization, I/O, and so on). Correct state management is not easy to solve without underlying platform support, especially when accuracy, consistency, and fault tolerance are important.
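The stateful counting case above can be sketched as a minimal operator in plain Java. The class and method names are illustrative, not Apex's or Flink's actual operator API: the point is that the per-key counts live inside the operator across tuples, and that a platform would snapshot (`checkpoint`) and restore that state for fault tolerance rather than replaying the unbounded stream from the beginning.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stateful counting operator; not a real framework API.
public class CountOperator {
    private Map<String, Long> counts = new HashMap<>();

    // Called once per incoming tuple; state stays inside the operator.
    public void process(String key) {
        counts.merge(key, 1L, Long::sum);
    }

    // A platform would serialize this snapshot to durable storage. With
    // high key cardinality the snapshot itself becomes the bottleneck,
    // which is why engines keep state local and checkpoint it carefully.
    public Map<String, Long> checkpoint() {
        return new HashMap<>(counts);
    }

    // On failure, the platform restores the last snapshot instead of
    // reprocessing the whole (unbounded) input.
    public void restore(Map<String, Long> snapshot) {
        counts = new HashMap<>(snapshot);
    }

    public long get(String key) {
        return counts.getOrDefault(key, 0L);
    }
}
```

Note that naively serializing the whole map on every checkpoint is exactly the "swap in and out" cost the text warns about; production engines mitigate it with local, incrementally checkpointed state backends.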
