
Algorithm

Now, let's get a better understanding of the Gaussian prototypical network by going through it step by step:

  1. Let's say we have a dataset, D = {(x1, y1), (x2, y2), ..., (xi, yi)}, where x is the feature and y is the label. Let's say we have a binary label, which means we have only two classes, 0 and 1. We sample data points at random, without replacement, from each class in our dataset, D, and create our support set, S.
  2. Similarly, we sample data points at random per class and create the query set, Q.
  3. We will pass the support set to our embedding function, f(). The embedding function will generate the embeddings for our support set, along with the covariance matrix.
  4. We calculate the inverse of the covariance matrix.
  5. We compute the prototype of each class in the support set as follows:

$$p^c = \frac{\sum_{i} s_i^c \circ x_i^c}{\sum_{i} s_i^c}$$

In this equation, $s_i^c$ is the diagonal of the inverse covariance matrix, $x_i^c$ denotes the embeddings of the support set, and the superscript c denotes the class.
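
The following is a minimal NumPy sketch of this covariance-weighted prototype computation, not the book's implementation; the names `class_prototype`, `embeddings`, and `inv_cov_diag` are assumptions for illustration:

```python
import numpy as np

def class_prototype(embeddings, inv_cov_diag):
    """Covariance-weighted prototype of one class.

    embeddings   : (n, d) array -- embeddings of the class's support points
    inv_cov_diag : (n, d) array -- diagonal of the inverse covariance matrix
                   produced by the embedding function for each support point
    """
    # Weight each embedding component-wise by its inverse-covariance diagonal,
    # then normalize by the summed weights (element-wise division).
    weighted_sum = (inv_cov_diag * embeddings).sum(axis=0)
    return weighted_sum / inv_cov_diag.sum(axis=0)

# Example: 3 support points with 4-dimensional embeddings for one class.
embeddings = np.random.randn(3, 4)
inv_cov_diag = np.ones((3, 4))   # with unit weights, this reduces to the plain mean
prototype = class_prototype(embeddings, inv_cov_diag)
```

Note that when every weight is 1, the prototype collapses to the ordinary class mean used in a vanilla prototypical network; the inverse-covariance weights let more confident support points contribute more to the prototype.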

  6. After computing the prototype of each class in the support set, we learn the embeddings for the query set, Q. Let's say x' is the embedding of the query point.
  7. We calculate the distance of the query point embeddings to the class prototypes as follows:
  8. After calculating the distance between the class prototypes and the query set embeddings, we predict the class of the query point as the class that has the minimum distance, as follows:

$$\hat{y} = \arg\min_{c} \, d(x', p^c)$$
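
Here is a minimal sketch of steps 7 and 8, assuming a squared distance weighted by each class's aggregated inverse-covariance diagonal; the exact distance form and the names `predict_query`, `prototypes`, and `proto_inv_cov_diag` are assumptions, not the book's code:

```python
import numpy as np

def predict_query(query_embedding, prototypes, proto_inv_cov_diag):
    """Assign a query point to the class whose prototype is nearest.

    query_embedding    : (d,) embedding of the query point (x' above)
    prototypes         : dict {class_label: (d,) prototype vector}
    proto_inv_cov_diag : dict {class_label: (d,) inverse-covariance diagonal
                         aggregated over that class's support points}
    """
    distances = {}
    for label, prototype in prototypes.items():
        diff = query_embedding - prototype
        # Squared distance, weighted component-wise by the inverse covariance.
        distances[label] = float(np.sum(proto_inv_cov_diag[label] * diff ** 2))
    # Predict the class with the minimum distance to its prototype.
    return min(distances, key=distances.get)

# Example with two classes (0 and 1) and 4-dimensional embeddings.
prototypes = {0: np.zeros(4), 1: np.ones(4)}
proto_inv_cov_diag = {0: np.ones(4), 1: np.ones(4)}
predicted_class = predict_query(np.full(4, 0.2), prototypes, proto_inv_cov_diag)  # -> 0
```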