
Lessons learned from building our first enterprise-ready data pipeline

Leveraging open source frameworks, libraries, and tools definitely helped us implement our data pipeline more efficiently. For example, Kafka and Spark were straightforward to deploy and easy to use, and when we were stuck, we could always rely on the developer community for help, for instance through question and answer sites such as https://stackoverflow.com.

Using a cloud-based managed service for the sentiment analysis step, such as the IBM Watson Tone Analyzer (https://www.ibm.com/watson/services/tone-analyzer), was another positive. It allowed us to abstract out the complexity of training and deploying a model, making the whole step more reliable and certainly more accurate than if we had implemented it ourselves.

It was also super easy to integrate, as we only needed to make a REST request (also known as an HTTP request) from our architecture. However, we still needed to know the specification for each of the APIs, which can take a long time to get right. This step is usually made simpler by using an SDK library, which is often provided for free and in most popular languages, such as Python, R, Java, and Node.js. SDK libraries provide higher-level programmatic access to the service by abstracting out the code that generates the REST requests. The SDK typically provides a class to represent the service, where each method encapsulates a REST API while taking care of user authentication and other headers.
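To make the difference concrete, here is a minimal sketch of what the raw REST call looks like in Python using the requests library. The endpoint URL, version date, and payload shape are illustrative assumptions, not the exact Watson Tone Analyzer specification; this is precisely the kind of detail the SDK hides from you.

```python
import requests

# Hypothetical endpoint and version date -- check the Tone Analyzer
# documentation for the exact URL, parameters, and authentication scheme.
TONE_API_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"

def analyze_tone(text, api_key):
    """Send a raw REST (HTTP) request to the tone analysis service."""
    response = requests.post(
        TONE_API_URL,
        params={"version": "2017-09-21"},   # API version date (illustrative)
        auth=("apikey", api_key),           # basic auth with an API key
        json={"text": text},                # document to analyze
    )
    response.raise_for_status()
    return response.json()                  # tone scores as a Python dict

if __name__ == "__main__":
    print(analyze_tone("I love how easy this pipeline was to build!", "MY_API_KEY"))
```

With an SDK, all of this (URL, versioning, authentication, headers, JSON handling) collapses into a single method call on a service object.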

On the tooling side, we were very impressed with Jupyter Notebooks, which provided excellent features, such as collaboration and full interactivity (we'll cover Notebooks in more detail later on).

Not everything was smooth though, as we struggled in a few key areas:

  • Which programming language to choose for some of the key tasks, such as data enrichment and data analysis. We ended up using Scala and Python, even though the team had little experience with them, mostly because they are very popular among data scientists and also because we wanted to learn them.
  • Creating visualizations for data exploration was taking too much time. Building even a simple chart with a visualization library, such as Matplotlib or Bokeh, required writing too much code, which in turn slowed down our pace of experimentation (the first sketch after this list illustrates the boilerplate involved).
  • Operationalizing the analytics into a real-time dashboard was far too labor-intensive to scale. As mentioned before, we needed to write a full-fledged standalone Node.js application that consumed data from Kafka and had to be deployed as a Cloud Foundry application (https://www.cloudfoundry.org) on the IBM Cloud. Understandably, this task took quite a long time to complete the first time, but we also found it difficult to update: any change in the analytics that write data to Kafka had to be synchronized with a matching change in the dashboard application (the second sketch after this list illustrates this coupling).
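On the visualization point, even a basic bar chart of tone counts takes a dozen or so lines of setup. The sketch below uses Matplotlib with hypothetical, hard-coded counts standing in for the output of the aggregation step; it is only meant to show the amount of ceremony involved for a single exploratory chart.

```python
import matplotlib.pyplot as plt

# Hypothetical tone counts -- in the real pipeline these would come from
# the Spark aggregation step, not from hard-coded values.
tones = ["joy", "anger", "fear", "sadness"]
counts = [120, 45, 30, 60]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(tones, counts, color="steelblue")
ax.set_xlabel("Detected tone")
ax.set_ylabel("Number of tweets")
ax.set_title("Tone distribution in the sample window")
plt.tight_layout()
plt.show()
```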
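On the dashboard point, the coupling problem is easiest to see from the consuming side. Our actual application was written in Node.js, but the following Python sketch (using the kafka-python package, with a hypothetical topic name and broker address) shows the shape of the code: the consumer must know exactly which fields the analytics write to Kafka, so any upstream change forces a matching change here.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic name and broker address -- these stand in for the
# actual Kafka configuration used by the dashboard application.
consumer = KafkaConsumer(
    "enriched-tweets",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # Push the record to the dashboard layer (for example, over a WebSocket).
    # The field names read here must match what the analytics write upstream,
    # which is the synchronization problem described in the last bullet.
    print(record)
```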