
Chapter 4. Unsupervised Learning

Labeling a set of observations for classification or regression can be a daunting task, especially in the case of a large feature set. In some cases, labeled observations are either unavailable or impossible to create. In an attempt to extract hidden associations or structures from observations, the data scientist relies on unsupervised learning techniques to detect patterns or similarity in data.

The goal of unsupervised learning is to discover patterns of regularity and irregularity in a set of observations. These techniques are also applied to reduce the solution or feature space.

There are numerous unsupervised algorithms; some are better suited to handling dependent features, while others generate affinity groups in the case of hidden features [4:1]. In this chapter, you will learn three of the most common unsupervised learning algorithms:

  • K-means: Clustering observed features
  • Expectation-Maximization (EM): Clustering observed and latent features
  • Principal components analysis (PCA): Reducing the dimension of the feature space

Any of these algorithms can be applied to technical analysis or fundamental analysis. Fundamental analyses of financial ratios and technical analyses of price movements are described in the Technical analysis section under Finances 101 in the Appendix. The K-means algorithm is fully implemented in Scala, while the EM and principal components analyses leverage the Apache Commons Math library.
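To give a flavor of the Scala implementation before the full treatment later in the chapter, the core K-means iteration (assign each observation to its nearest centroid, then recompute each centroid as the mean of its cluster) can be sketched as follows. This is a minimal one-dimensional sketch for illustration only; the object and method names are hypothetical and the chapter's actual implementation handles multi-dimensional observations.

```scala
// Minimal sketch of K-means clustering on one-dimensional observations.
// All names (KMeansSketch, iterate, cluster) are illustrative, not the book's API.
object KMeansSketch {

  // One K-means step: assign each observation to its nearest centroid,
  // then replace each centroid by the mean of its assigned observations.
  // A centroid with no assigned observations is left unchanged.
  def iterate(obs: Vector[Double], centroids: Vector[Double]): Vector[Double] = {
    val clusters = obs.groupBy(x => centroids.minBy(c => math.abs(x - c)))
    centroids.map(c => clusters.get(c).map(xs => xs.sum / xs.size).getOrElse(c))
  }

  // Repeat the assignment/update step until the centroids stop moving
  // or the iteration budget is exhausted.
  @annotation.tailrec
  def cluster(obs: Vector[Double],
              centroids: Vector[Double],
              maxIters: Int = 100): Vector[Double] = {
    val next = iterate(obs, centroids)
    if (next == centroids || maxIters <= 0) next
    else cluster(obs, next, maxIters - 1)
  }
}
```

For example, clustering `Vector(1.0, 1.2, 0.8, 8.0, 8.2, 7.8)` from initial centroids `Vector(0.0, 10.0)` converges in two steps to centroids near 1.0 and 8.0, one per visible group in the data.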

The chapter concludes with a brief overview of dimension reduction techniques for non-linear models.
