- Hands-On Meta Learning with Python
- Sudharsan Ravichandiran
Algorithm
The algorithm of prototypical networks is as follows:
- Let's say we have the dataset, D, comprising {(x1, y1), (x2, y2), ... (xn, yn)} where x is the feature and y is the class label.
- Since we perform episodic training, we randomly sample n data points per class from our dataset, D, and prepare our support set, S.
- Similarly, we select n data points per class and prepare our query set, Q.
- We learn the embeddings of the data points in our support set using our embedding function, f_\phi(\cdot). The embedding function can be any feature extractor, for example, a convolutional network for images or an LSTM network for text.
- Once we have the embeddings for each data point, we compute the prototype of each class by taking the mean of the embeddings of the data points belonging to that class:

  c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i)
- Similarly, we learn the query set embeddings.
- We calculate the Euclidean distance, d, between the query set embeddings and the class prototypes.
- We predict the probability, p_\phi(y = k \mid x), of the class of a query point by applying softmax over the negative distances:

  p_\phi(y = k \mid x) = \frac{\exp(-d(f_\phi(x), c_k))}{\sum_{k'} \exp(-d(f_\phi(x), c_{k'}))}
- We compute the loss function, J(\phi), as the negative log-probability, J(\phi) = -\log p_\phi(y = k \mid x), and we minimize this loss using stochastic gradient descent.
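The episodic steps above can be sketched in NumPy. This is a minimal illustration, not the book's implementation: `prototypical_episode` is a hypothetical helper name, and the identity `embed` stands in for a real feature extractor such as a convolutional network.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prototypical_episode(embed, support_x, support_y, query_x, query_y, n_classes):
    # Embed support and query points with the shared embedding function f_phi.
    s_emb = embed(support_x)            # shape: (n_support, d)
    q_emb = embed(query_x)              # shape: (n_query, d)
    # Prototype of each class = mean embedding of its support points.
    prototypes = np.stack([s_emb[support_y == k].mean(axis=0)
                           for k in range(n_classes)])   # (n_classes, d)
    # Squared Euclidean distance from each query embedding to each prototype.
    d = ((q_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Class probabilities: softmax over the negative distances.
    p = softmax(-d)
    # Loss: mean negative log-probability of the true class.
    loss = -np.log(p[np.arange(len(query_y)), query_y]).mean()
    return p, loss

# Demo episode: 2 classes, 2 support points each, identity embedding.
embed = lambda x: x  # placeholder for a learned feature extractor
support_x = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.0, 0.5], [10.0, 10.5]])
query_y = np.array([0, 1])
p, loss = prototypical_episode(embed, support_x, support_y, query_x, query_y, 2)
```

In training, the loss would be backpropagated through the embedding function's parameters with stochastic gradient descent; here the embedding is fixed, so the episode only illustrates the forward pass.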