- Mastering Machine Learning for Penetration Testing
- Chiheb Chebbi
Artificial neural networks
Artificial neural networks are among the hottest applications of artificial intelligence, especially machine learning. The main aim of artificial neural networks is to build models that can learn like a human mind; in other words, we try to mimic the human mind. That is why, in order to learn how to build neural network systems, we need a clear understanding of how the human mind actually works. The human mind is an amazing entity. It is composed of, and wired together by, neurons, which are responsible for transferring and processing information.
We all know that the human mind can perform many tasks, like hearing, seeing, tasting, and other far more complicated ones. So logically, one might think that the mind is composed of many different areas, each responsible for a specific task thanks to a specific algorithm. But this is totally wrong. According to research, all the different parts of the human mind function thanks to one algorithm, not many different algorithms. This hypothesis is called the one algorithm hypothesis.
Now we know that the mind works by using one algorithm. But what is this algorithm? How is it used? How is information processed with it?
To answer the preceding questions, we need to look at the logical representation of a neuron. The artificial representation of a human neuron is called a perceptron: it takes several inputs, multiplies each by a weight, sums them together with a bias, and passes the result through an activation function to produce its output. A perceptron is represented by the following graph:

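As a minimal sketch of that computation (the function and variable names here are my own, not from the book), a perceptron with a step activation can be written in a few lines of Python:

```python
import numpy as np

def perceptron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a step activation."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1 if weighted_sum > 0 else 0  # fire (1) or stay silent (0)

# Example: with these weights and bias, the perceptron behaves like a logical AND gate
x = np.array([1, 1])
w = np.array([0.5, 0.5])
print(perceptron(x, w, bias=-0.7))  # -> 1
```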
There are many activation functions in use; you can view them as logical gates that decide how a neuron fires (a small sketch of each follows this list):
- Step function: Outputs 1 if the input exceeds a predefined threshold value, and 0 otherwise.
- Sigmoid function: sigmoid(x) = 1 / (1 + e^(-x)), which squashes the input into the range (0, 1).
- Tanh function: tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)), which squashes the input into the range (-1, 1).
- ReLU function: relu(x) = max(0, x), which passes positive values through unchanged and maps negative values to 0.
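Here is a hedged NumPy sketch of these four functions (the helper names and the threshold default are illustrative assumptions, not from the book):

```python
import numpy as np

def step(x, threshold=0.0):
    return np.where(x > threshold, 1.0, 0.0)  # fires only past the threshold

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))           # squashes input into (0, 1)

def tanh(x):
    return np.tanh(x)                          # squashes input into (-1, 1)

def relu(x):
    return np.maximum(0.0, x)                  # keeps positives, zeroes out negatives

x = np.linspace(-2.0, 2.0, 5)
for fn in (step, sigmoid, tanh, relu):
    print(fn.__name__, fn(x))
```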
Many fully connected perceptrons make up what we call a Multi-Layer Perceptron (MLP) network (a minimal forward-pass sketch follows this list). A typical neural network contains the following:
- An input layer
- Hidden layers
- An output layer
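To make the layer structure concrete, here is a small sketch of a forward pass through a tiny MLP in plain NumPy (the layer sizes, random weights, and ReLU choice are placeholder assumptions, not values from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer of 4 features, one hidden layer of 8 neurons, output layer of 2 neurons
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU activation
    return hidden @ W2 + b2                # output layer (raw scores)

sample = rng.normal(size=4)
print(forward(sample))
```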
We use the term deep learning once a network has more than three hidden layers. There are many types of deep learning networks in use:
- Convolutional neural networks (CNNs)
- Recurrent neural networks (RNNs)
- Long short-term memory (LSTM)
- Shallow neural networks
- Autoencoders (AEs)
- Restricted Boltzmann machines
Don't worry; we will discuss the preceding algorithms in detail in future chapters.
To build deep learning models, we follow five steps, as suggested by Dr. Jason Brownlee (a minimal Keras sketch of the workflow follows this list). The five steps are as follows:
- Network definition
- Network compiling
- Network fitting
- Network evaluation
- Prediction
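As a minimal sketch of those five steps with Keras (assuming TensorFlow/Keras is installed; the placeholder data, layer sizes, and hyperparameters are my own, not the book's):

```python
import numpy as np
from tensorflow import keras

# Placeholder data: 100 samples with 8 features and a binary label
X_train = np.random.rand(100, 8)
y_train = np.random.randint(0, 2, size=100)

# 1. Network definition
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# 2. Network compiling
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 3. Network fitting
model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=0)

# 4. Network evaluation
loss, accuracy = model.evaluate(X_train, y_train, verbose=0)
print(f"loss={loss:.3f}, accuracy={accuracy:.3f}")

# 5. Prediction
print(model.predict(X_train[:3], verbose=0))
```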