- Hands-On Deep Learning Architectures with Python
- Yuxi (Hayden) Liu Saransh Mehta
ANNs
An ANN is built from two components: nodes and weights. Nodes play the role of neurons, while weights are the learnable parameters that connect the neurons to each other and control their activation paths.
So how do we make an artificial neuron, or node? Consider x to be a scalar input to the neuron and w to be the scalar weight for that input. (If you aren't familiar with the terms, a scalar is a single real number, while a vector is an ordered list of such numbers.) The artificial neuron can be represented by the following equation:
a = w.x + b
The circle represents the neuron, which takes the scalar x as input and outputs a after multiplying it by the weight w. Here, b is called the bias. The bias is added to the equation to provide the capability of shifting the output for a given range of inputs. The role of the bias will become clearer once we go through the activation functions.
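As a quick sketch of this single-neuron computation (the values of w, x, and b below are illustrative, not from the book):

```python
# A single artificial neuron with one scalar input:
# output a = w * x + b (example values are made up)
w = 0.5   # scalar weight
b = 1.0   # bias, shifts the output
x = 2.0   # scalar input

a = w * x + b
print(a)  # 2.0
```

Changing b shifts the output up or down for every input, which is exactly the shifting role described above.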
Now, suppose the neuron takes not just a single scalar input, but multiple inputs. The inputs can then be collected into a vector (say, P). P can be written as a set of scalar inputs p1, p2, p3, ..., pn, and the neuron will have a corresponding weight vector (say, W = w1, w2, w3, ..., wn), which is used to activate the neuron. The following represent the P and W vectors:
P = [p1, p2, p3, ..., pn]
W = [w1, w2, w3, ..., wn]
So what change do we need to make to the equation to fit these multiple inputs? We simply sum the weighted inputs! This changes the basic equation a = w.x + b to the following:
a = w1.p1 + w2.p2 + ... + wn.pn + b = W.P + b
The artificial neuron taking in multiple inputs would look like the following diagram:

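The weighted sum above is just a dot product, so a multi-input neuron can be sketched in a few lines of NumPy (the vectors and bias below are made-up example values):

```python
import numpy as np

# Multi-input neuron: a = sum_i(w_i * p_i) + b = W.P + b
P = np.array([1.0, 2.0, 3.0])   # input vector (p1, p2, p3)
W = np.array([0.2, 0.4, 0.6])   # weight vector, one weight per input
b = 0.5                         # bias

a = np.dot(W, P) + b            # weighted sum of inputs plus bias
print(a)  # 3.3
```

Note that the single-input case is recovered when P and W each have just one element.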
But what can a single neuron do alone? Although it can still output values that can be used to make binary decisions (zero or one), we need many such neurons arranged in parallel and interconnected, just as in the brain, in order to go beyond binary decisions. So, what would that look like? Here's the diagram:

What changes are required in the equation now? Just the dimensions of the weight vector and the output vector. Now we will have n x m weights, where n is the number of inputs and m is the number of neurons. We will also have a separate output from each neuron, so the output becomes a vector:
aj = w1j.p1 + w2j.p2 + ... + wnj.pn + bj, for j = 1, 2, ..., m

A = [a1, a2, ..., am]
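With the weights collected into an n x m matrix, the whole layer of neurons reduces to one matrix-vector product. Here is a sketch with n = 3 inputs and m = 2 neurons (all numbers are made-up example values):

```python
import numpy as np

# A layer of m neurons sharing the same n inputs.
P = np.array([1.0, 2.0, 3.0])    # input vector, shape (n,)
W = np.array([[0.1, 0.4],
              [0.2, 0.5],
              [0.3, 0.6]])       # weight matrix, shape (n, m): column j holds neuron j's weights
B = np.array([0.5, -0.5])        # one bias per neuron, shape (m,)

A = P @ W + B                    # output vector, shape (m,): one output per neuron
print(A)  # [1.9 2.7]
```

Each column of W plays the role of the single weight vector from before, so each element of A is just the multi-input neuron equation applied once per neuron.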
So far, we have learned the basic structure and the mathematical equations modeling ANNs; next, we shall look at another important concept: activation functions.