- Hands-On Generative Adversarial Networks with Keras
- Rafael Valle
ReLU
The ReLU non-linearity is a piecewise linear function in which the non-linearity is introduced by rectification. Unlike the sigmoid and Tanh non-linearities, which have continuous gradients, the gradient of ReLU takes only two values: 0 for inputs smaller than 0 and 1 for inputs larger than 0. Hence, the gradients of ReLU are sparse. Although the gradient of ReLU at 0 is undefined, common practice sets it to 0. There are variations of the ReLU non-linearity, including ELU and Leaky ReLU. Compared to sigmoid and Tanh, the derivative of ReLU is faster to compute, and ReLU induces sparsity in models.

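The snippet below is a minimal sketch of this behavior: a NumPy implementation of ReLU and its gradient (with the gradient at 0 set to 0 by convention), followed by the Keras layers commonly used for ReLU and its variants. The specific layer arguments shown (units, `alpha` values) are illustrative assumptions, not values from the book.

```python
import numpy as np
import tensorflow as tf

# ReLU: max(0, x), piecewise linear.
def relu(x):
    return np.maximum(0.0, x)

# Gradient of ReLU: 0 for x <= 0, 1 for x > 0
# (the undefined gradient at x = 0 is set to 0 by convention).
def relu_grad(x):
    return (x > 0).astype(np.float32)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0], dtype=np.float32)
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]  -> sparse gradients

# In Keras, ReLU and its variants are available as activations/layers
# (hypothetical configuration, for illustration only):
dense_relu = tf.keras.layers.Dense(16, activation="relu")
leaky_relu = tf.keras.layers.LeakyReLU(alpha=0.2)
elu = tf.keras.layers.ELU(alpha=1.0)
```

Because the gradient is exactly zero for negative inputs, many units produce no gradient signal at all for a given example, which is the sparsity referred to above; Leaky ReLU and ELU trade some of that sparsity for a non-zero gradient on the negative side.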