Machine learning and big data

Another area that can be exploited using machine learning is big data. After the first release of Apache Hadoop, which implemented an efficient MapReduce algorithm, the amount of information managed in different business contexts grew exponentially. At the same time, the opportunity to use it for machine learning purposes arose and several applications such as mass collaborative filtering became reality.

Imagine an online store with a million users and only one thousand products. Consider a matrix where each user is associated with every product by an implicit or explicit ranking. This matrix will contain 1,000,000 x 1,000 cells, and even though the number of products is very limited, any operation performed on it will be slow and memory-consuming. With a cluster and parallel algorithms, however, this problem disappears, and operations on even higher-dimensional data can be carried out in a very short time.
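To get a feeling for the scale involved, it's easy to estimate how much memory such a dense matrix would need. The sketch below uses the hypothetical sizes from the example (one million users, one thousand products) and standard 64-bit floating-point entries; the exact numbers are illustrative, not a benchmark.

```python
import numpy as np

# Hypothetical sizes taken from the example in the text
n_users, n_products = 1_000_000, 1_000

# Memory required to store the dense user-product matrix
# as 64-bit floats (8 bytes per cell)
bytes_needed = n_users * n_products * np.dtype(np.float64).itemsize

print(f"{bytes_needed / 1024**3:.1f} GiB")  # roughly 7.5 GiB
```

In practice such matrices are also very sparse (most users rank only a handful of products), which is exactly why distributed factorization methods, such as the one discussed in the chapter on recommendation systems, are preferred over dense in-memory operations.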

Think about training an image classifier with a million samples. A single machine needs to iterate over the dataset several times, processing small batches of pictures. Even though this problem can be tackled with a streaming approach (using a limited amount of memory), it's not surprising to wait a few days before the model begins to perform well. Adopting a big data approach instead, it's possible to asynchronously train several local models, periodically share the updates, and re-synchronize them all with a master model. This technique has also been exploited to solve some reinforcement learning problems, where many agents (often managed by different threads) played the same game, periodically contributing to a global intelligence.
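The synchronization step mentioned above can be sketched in its simplest form: each worker holds a local copy of the parameters, and the master model is obtained by averaging them. This is only a minimal illustration of the idea (real systems such as parameter servers use more sophisticated update rules); the worker weights here are made-up values.

```python
import numpy as np

def synchronize(local_weights):
    """Merge the local models into a master model by averaging
    their parameter vectors (a simplified synchronization scheme)."""
    return np.mean(local_weights, axis=0)

# Three hypothetical workers, each with its own copy of the parameters
# after a round of local training
w1 = np.array([0.9, 1.1, 0.8])
w2 = np.array([1.0, 1.0, 1.2])
w3 = np.array([1.1, 0.9, 1.0])

master = synchronize([w1, w2, w3])
print(master)  # [1. 1. 1.]
```

After each synchronization, the master parameters would be broadcast back to the workers, which resume training on their local shards of the data.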

Not every machine learning problem is suitable for big data, and not all big datasets are really useful when training models. However, their conjunction in particular situations can lead to extraordinary results by removing many limitations that often affect smaller scenarios.

In the chapter dedicated to recommendation systems, we're going to discuss how to implement collaborative filtering using Apache Spark. The same framework will also be adopted for an example of Naive Bayes classification.

If you want to know more about the whole Hadoop ecosystem, visit http://hadoop.apache.org. Apache Mahout (http://mahout.apache.org) is a dedicated machine learning framework, and Spark (http://spark.apache.org), one of the fastest computational engines, has a module called MLlib that implements many common algorithms that benefit from parallel processing.