- Machine Learning with Swift
- Alexander Sosnovshchenko
The motivation behind ML
Let's start with an analogy. There are two ways of learning an unfamiliar language:
- Learning the language rules by heart, using textbooks, dictionaries, and so on. That's how college students usually do it.
- Observing live language: by communicating with native speakers, reading books, and watching movies. That's how children do it.
In both cases, you build a model of the language in your mind or, as some prefer to say, develop a sense of the language.
In the first case, you are trying to build a logical system based on rules. Along the way, you will encounter many problems: exceptions to the rules, different dialects, borrowings from other languages, idioms, and much more. Someone else, not you, has derived and described the rules and structure of the language for you.
In the second case, you derive the same rules from the available data. You may not even be aware that these rules exist, yet you gradually adjust to the hidden structure and internalize the laws. You use special brain cells called mirror neurons to mimic native speakers, an ability honed by millions of years of evolution. After some time, when you encounter incorrect word usage, you just feel that something is wrong, but you can't immediately say what exactly.
In either case, the next step is to apply the resulting language model in the real world, and the results may differ. In the first case, you will struggle every time you come across a missing hyphen or comma, but you may be able to get a job as a proofreader at a publishing house. In the second case, everything depends on the quality, diversity, and amount of data you were trained on. Just imagine a person in the middle of New York who learned English solely from Shakespeare. Would he be able to hold a normal conversation with the people around him?
Now let's put a computer in place of the person in our example. The two approaches correspond to two programming techniques. The first corresponds to writing ad hoc algorithms made of conditions, loops, and so on, through which a programmer expresses the rules and structures explicitly. The second represents ML, in which the computer itself identifies the underlying structure and rules based on the available data.
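To make the contrast concrete, here is a minimal, self-contained Swift sketch. It is not from the book; the function names and the toy word-scoring "training" step are illustrative assumptions. The first function encodes the decision rules by hand; the second derives simple per-word scores from labeled examples and uses them to decide.

```swift
import Foundation

// Approach 1: hand-written rules. The programmer must anticipate
// every rule and exception in advance.
func isPositiveByRules(_ review: String) -> Bool {
    let lowered = review.lowercased()
    let positiveWords = ["great", "excellent", "love"]
    let negativeWords = ["terrible", "awful", "hate"]
    let positives = positiveWords.filter { lowered.contains($0) }.count
    let negatives = negativeWords.filter { lowered.contains($0) }.count
    return positives > negatives
}

// Approach 2: derive the "rules" from labeled data.
// Here the model is just a table of per-word sentiment scores counted
// from examples, but the shape is the same as in real ML:
// data in, decision function out.
struct LearnedSentimentModel {
    private var scores: [String: Int] = [:]

    mutating func train(on examples: [(text: String, isPositive: Bool)]) {
        for example in examples {
            let delta = example.isPositive ? 1 : -1
            for word in example.text.lowercased().split(separator: " ") {
                scores[String(word), default: 0] += delta
            }
        }
    }

    func predict(_ review: String) -> Bool {
        let total = review.lowercased()
            .split(separator: " ")
            .reduce(0) { $0 + (scores[String($1)] ?? 0) }
        return total > 0
    }
}

var model = LearnedSentimentModel()
model.train(on: [
    (text: "I love this phone, excellent screen", isPositive: true),
    (text: "terrible battery, I hate it", isPositive: false)
])
print(isPositiveByRules("What an excellent camera"))  // rule-based decision
print(model.predict("I love the camera"))             // data-driven decision
```

Real ML models replace the word-count table with richer statistical structure, but the division of labor is the same: the behavior of the first function was written by hand, while the behavior of the second comes entirely from the examples it was shown.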
The analogy is deeper than it seems at first glance. For many tasks, building the algorithms directly is prohibitively hard because of the variability of the real world. It may require domain experts to describe all the rules and edge cases explicitly, and the resulting models can be fragile and rigid. On the other hand, the same tasks can be solved by letting computers figure out the rules on their own from a reasonable amount of data. Face recognition is one such task: it is virtually impossible to formalize in terms of conventional imperative algorithms and data structures, and only recently was it solved successfully with the help of ML.