
NRT – high-level system view

The previous section of this chapter was dedicated to giving you an understanding of the basic building blocks of an NRT application and its logical overview. The next step is to understand the functional and systems view of the NRT architectural framework. The following figure outlines the various architectural blocks and cross-cutting concerns:

Describing the system horizontally from left to right, the process starts with data ingestion and transformation in near real-time using low-latency components. The transformed data is passed on to the next logical unit, which performs highly optimized and parallel operations on the data; this unit is the near real-time processing engine. Once the data has been aggregated and correlated and actionable insights have been derived, it is passed on to the presentation layer, which, along with real-time dashboarding and visualization, may have a persistence component that retains the data for long-term deep analytics.
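The left-to-right flow above can be sketched as a chain of small stages. This is a minimal illustrative sketch, not any particular framework's API; the stage names (`ingest`, `transform`, `process`, `present`) and the event fields are assumptions made for the example.

```python
# Minimal sketch of the left-to-right NRT flow: ingestion -> transformation
# -> processing engine -> presentation/persistence. All names are illustrative.
from collections import defaultdict

def ingest(raw_events):
    """Low-latency collection: yield events as they arrive."""
    for event in raw_events:
        yield event

def transform(events):
    """Normalize each raw event before handing it to the processing engine."""
    for event in events:
        yield {"user": event["user"], "amount": float(event["amount"])}

def process(events):
    """Processing engine: aggregate and correlate events into insights."""
    totals = defaultdict(float)
    for event in events:
        totals[event["user"]] += event["amount"]
    return dict(totals)

def present(insights, store):
    """Presentation layer: dashboard view plus a persistence component."""
    store.update(insights)  # stand-in for a long-term store such as HDFS/HBase
    return insights         # what a real-time dashboard would render

raw = [{"user": "a", "amount": "10"},
       {"user": "b", "amount": "5"},
       {"user": "a", "amount": "2.5"}]
store = {}
insights = present(process(transform(ingest(raw))), store)
print(insights)  # {'a': 12.5, 'b': 5.0}
```

In a production system each stage would run on a distributed, low-latency component; the generator chain here only mirrors the logical order of the blocks in the figure.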

The cross-cutting concerns that span all the components of the NRT framework, as depicted in the previous figure, are:

  • Security
  • System management
  • Data integrity and management

Next, we are going to get you acquainted with four basic streaming patterns, so that you are familiar with the common flavors that streaming use cases take and their optimal solutions (covered in later sections):

  • Stream ingestion: Here, all we are expected to do is persist the events to stable storage, such as HDFS, HBase, Solr, and so on. So all we need are low-latency stream collection, transformation, and persistence components.
  • Near real-time (NRT) processing: This application design allows for an external context and addresses complex use cases such as anomaly or fraud detection. It requires filtering, alerting, de-duplication, and transformation of events based on sophisticated business logic. All of these operations must be performed at extremely low latency.
  • NRT event partitioned processing: This is very close to NRT processing, but with a variation that lets it derive benefits from partitioning the data; for instance, storing more relevant external information in memory. This pattern also operates at extremely low latency.
  • NRT and complex models/machine learning: This pattern mostly requires us to execute very complex models or operations over a sliding window of time across the set of events in the stream. These are highly complex operations that require micro-batching of the data and operate at very low latency.
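The last pattern's sliding window over micro-batches can be illustrated with a small sketch. The window size, slide interval, and the trivial `score` function standing in for a complex model are all assumptions made for this example, not part of any specific engine.

```python
# Illustrative sliding-window micro-batching: group the event stream into
# overlapping windows and run a (here deliberately trivial) model over each.
from collections import deque

def sliding_windows(events, window_size=3, slide=1):
    """Yield overlapping micro-batches of window_size events, advancing by slide."""
    buf = deque(maxlen=window_size)
    for i, event in enumerate(events):
        buf.append(event)
        # Emit once the window is full, every `slide` events thereafter.
        if len(buf) == window_size and (i - window_size + 1) % slide == 0:
            yield list(buf)

def score(window):
    """Stand-in for a complex model: the mean of the window's values."""
    return sum(window) / len(window)

stream = [4, 8, 15, 16, 23, 42]
scores = [score(w) for w in sliding_windows(stream)]
print(scores)  # [9.0, 13.0, 18.0, 27.0]
```

Real engines window by event time rather than by event count and distribute the model evaluation, but the core idea is the same: each micro-batch is a bounded slice of the stream that a model can be applied to at low latency.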