
Algorithm

The algorithm of prototypical networks is as follows:

  1. Let's say we have a dataset, D, comprising {(x1, y1), (x2, y2), ..., (xn, yn)}, where x is the feature and y is the class label.
  2. Since we perform episodic training, we randomly sample n data points per class from our dataset, D, and prepare our support set, S.
  3. Similarly, we sample n data points per class and prepare our query set, Q.
  4. We learn the embeddings of the data points in our support set using our embedding function, f_φ(). The embedding function can be any feature extractor, for example, a convolutional network for images or an LSTM network for text.
  5. Once we have the embeddings for each data point, we compute the prototype of each class by taking the mean of the embeddings of the data points belonging to that class: c_k = (1/|S_k|) * Σ f_φ(x_i), where the sum runs over all (x_i, y_i) in S_k. (A runnable sketch of steps 1 to 5 follows this list.)
  6. Similarly, we learn the embeddings of the query set.
  7. We calculate the Euclidean distance, d, between the query set embeddings and the class prototypes.
  8. We predict the probability, p_φ(y = k | x), of the class of a query point by applying softmax over the negative of the distance d: p_φ(y = k | x) = exp(-d(f_φ(x), c_k)) / Σ_k' exp(-d(f_φ(x), c_k')).
  9. We compute the loss function, J(φ), as the negative log probability, J(φ) = -log p_φ(y = k | x), and we try to minimize the loss using stochastic gradient descent. (A second sketch covering steps 6 to 9 follows the first one.)
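To make the first half of the procedure concrete, here is a minimal NumPy sketch of steps 1 to 5. It builds a small synthetic dataset, samples a support set and a query set per class, uses a random linear projection as a stand-in for the embedding function f_φ, and averages the support embeddings of each class to obtain the prototypes. The dataset, the projection, and the episode sizes are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset D: 3 classes, 20 four-dimensional feature vectors per class (illustrative).
num_classes, points_per_class, feature_dim = 3, 20, 4
X = rng.normal(size=(num_classes, points_per_class, feature_dim))
X += np.arange(num_classes).reshape(-1, 1, 1)           # shift each class so they differ

n_support, n_query = 5, 5                                # points sampled per class per episode

# Steps 2-3: randomly sample support and query points for each class.
idx = np.stack([rng.permutation(points_per_class) for _ in range(num_classes)])
support = np.stack([X[k, idx[k, :n_support]] for k in range(num_classes)])                    # (K, n_support, D)
query = np.stack([X[k, idx[k, n_support:n_support + n_query]] for k in range(num_classes)])   # (K, n_query, D)

# Step 4: embedding function f_phi -- here just a random linear projection for illustration.
embed_dim = 8
W = rng.normal(size=(feature_dim, embed_dim))

def f_phi(x):
    """Map raw features to the embedding space."""
    return x @ W

# Step 5: the prototype of each class is the mean embedding of its support points.
support_embeddings = f_phi(support)                      # (K, n_support, embed_dim)
prototypes = support_embeddings.mean(axis=1)             # (K, embed_dim)
print(prototypes.shape)                                  # (3, 8)
```

Continuing in the same spirit, the next sketch covers steps 6 to 9: Euclidean distances between the query embeddings and the prototypes, a softmax over the negative distances to get p_φ(y = k | x), and the negative log probability loss J(φ). To keep the block self-contained, the query embeddings and prototypes are generated randomly here; in a real run they would come from f_φ and the previous step, and the loss would be minimized with stochastic gradient descent with respect to the parameters of f_φ.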
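```python
import numpy as np

rng = np.random.default_rng(1)

num_classes, n_query, embed_dim = 3, 5, 8

# Stand-ins for the query embeddings f_phi(x) and the class prototypes c_k
# produced in the previous sketch (random here so this block runs on its own).
query_embeddings = rng.normal(size=(num_classes * n_query, embed_dim))
prototypes = rng.normal(size=(num_classes, embed_dim))
true_labels = np.repeat(np.arange(num_classes), n_query)      # class of each query point

# Step 7: Euclidean distance d between every query embedding and every prototype.
diff = query_embeddings[:, None, :] - prototypes[None, :, :]  # (N, K, embed_dim)
distances = np.linalg.norm(diff, axis=-1)                     # (N, K)

# Step 8: p_phi(y = k | x) = softmax over the negative distances.
logits = -distances
logits -= logits.max(axis=1, keepdims=True)                   # for numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Step 9: loss J(phi) = -log p_phi(y = k | x), averaged over the query set.
loss = -np.log(probs[np.arange(len(true_labels)), true_labels]).mean()
print(round(loss, 4))
```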