
Algorithm

The algorithm of prototypical networks is as follows:

  1. Let's say we have a dataset, D, comprising {(x1, y1), (x2, y2), ..., (xn, yn)}, where x is the feature and y is the class label.
  2. Since we perform episodic training, we randomly sample n data points per class from our dataset, D, and prepare our support set, S.
  3. Similarly, we sample n data points per class and prepare our query set, Q.
  4. We learn the embeddings of the data points in our support set using our embedding function, fφ(). The embedding function can be any feature extractor, say, a convolutional network for images or an LSTM network for text.
  5. Once we have the embeddings of each data point, we compute the prototype of each class by taking the mean of the embeddings of the data points belonging to that class:

     cκ = (1/|Sκ|) Σ(xi, yi) ∈ Sκ fφ(xi)

  6. Similarly, we learn the query set embeddings.
  7. We calculate the Euclidean distance, d, between the query set embeddings and the class prototypes.
  8. We predict the probability, pφ(y = k|x), of the class of a query point by applying softmax over the distance d:

     pφ(y = k|x) = exp(-d(fφ(x), cκ)) / Σκ' exp(-d(fφ(x), cκ'))

  9. We compute the loss function, J(φ), as the negative log probability, J(φ) = -log pφ(y = k|x), and we try to minimize the loss using stochastic gradient descent.
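The steps above can be sketched in NumPy. This is a minimal illustration, not a trainable implementation: the embedding function fφ is stood in for by a hypothetical linear map with weights W (in practice it would be a convolutional or LSTM network, and W would be updated by stochastic gradient descent on the loss).

```python
import numpy as np

def embed(x, W):
    # Hypothetical embedding function f_phi: a linear map standing in
    # for a convolutional or LSTM feature extractor.
    return x @ W

def prototypes(support_x, support_y, W):
    # Step 5: prototype of each class = mean embedding of the support
    # points belonging to that class.
    classes = np.unique(support_y)
    protos = np.stack([embed(support_x[support_y == k], W).mean(axis=0)
                       for k in classes])
    return classes, protos

def predict_proba(query_x, protos, W):
    # Steps 6-8: embed the queries, compute squared Euclidean distances
    # to each prototype, then softmax over the negative distances.
    q = embed(query_x, W)                                    # (n_q, dim)
    d = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (n_q, n_cls)
    logits = -d
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def episode_loss(query_x, query_y, classes, protos, W):
    # Step 9: J(phi) = mean negative log probability of the true class.
    p = predict_proba(query_x, protos, W)
    idx = np.searchsorted(classes, query_y)  # map labels to prototype rows
    return -np.log(p[np.arange(len(query_y)), idx]).mean()
```

For one episode you would sample a support set and a query set, compute the prototypes from the support set, and evaluate `episode_loss` on the queries; minimizing that loss over many episodes is what trains fφ.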