unstructured

[
  {
    "element_id": "1fced17e7fb29d9a55193a3c33b57446",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.",
    "type": "NarrativeText"
  },
  {
    "element_id": "2034a880526bd0e8273295f7d63a2286",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.",
    "type": "NarrativeText"
  },
  {
    "element_id": "27b36f031306ed6ef5cf87c24b66bd0c",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "MultiHead(Q, K, V ) = Concat(head1, ..., headh)W O where headi = Attention(QW Q i , KW K i , V W V i )",
    "type": "Formula"
  },
  {
    "element_id": "b64b0c84c1b06a2d8249079dd71405d8",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "Where the projections are parameter matrices W Q and W O ∈ Rhdv×dmodel. i ∈ Rdmodel×dk , W K i ∈ Rdmodel×dk , W V i ∈ Rdmodel×dv",
    "type": "NarrativeText"
  },
  {
    "element_id": "640df3a8e4d5fae30486497226b5c9b8",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "In this work we employ h = 8 parallel attention layers, or heads. For each of these we use dk = dv = dmodel/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.",
    "type": "NarrativeText"
  },
  {
    "element_id": "6cae52b99feb7821d42d1e968612c58e",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "3.2.3 Applications of Attention in our Model",
    "type": "Title"
  },
  {
    "element_id": "4750f5c635c72f7add8147f05e46a812",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "The Transformer uses multi-head attention in three different ways:",
    "type": "NarrativeText"
  },
  {
    "element_id": "5c4f25461422e00e53ecfd09bbe78dfa",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "• In \"encoder-decoder attention\" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].",
    "type": "ListItem"
  },
  {
    "element_id": "e9caf0a0f1f415cf9842ac607eaff0ff",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.",
    "type": "ListItem"
  },
  {
    "element_id": "b59bd0cd0e3c6ed28f3567ec55e14bc7",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.",
    "type": "ListItem"
  },
  {
    "element_id": "9d4113060fbfb7435932cad61b0e922a",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "3.3 Position-wise Feed-Forward Networks",
    "type": "Title"
  },
  {
    "element_id": "582ef9f4f3483f1f73bc5ec175bc8892",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.",
    "type": "NarrativeText"
  },
  {
    "element_id": "e5c3cb7f77f5a0ce57a7bc1ee967ebb9",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "FFN(x) = max(0, xW1 + b1)W2 + b2 (2)",
    "type": "Formula"
  },
  {
    "element_id": "bf361fe9e95971d11badc9dedd8de25d",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality df f = 2048.",
    "type": "NarrativeText"
  },
  {
    "element_id": "bb81a72db74d77059c459701be1193a4",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "3.4 Embeddings and Softmax",
    "type": "Title"
  },
  {
    "element_id": "e43d418c6a8817cfb09fcfd081bfd256",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transfor- mation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax dmodel. linear transformation, similar to [30]. In the embedding layers, we multiply those weights by",
    "type": "NarrativeText"
  },
  {
    "element_id": "aec400e3e65dc09b31513694bc9893b9",
    "metadata": {
      "data_source": {
        "date_modified": "2023-10-17T23:20:41+00:00",
        "record_locator": {
          "protocol": "s3",
          "remote_file_path": "utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf"
        },
        "url": "s3://utic-dev-tech-fixtures/small-pdf-set/page-with-formula.pdf",
        "version": "322346180051831626890059520864532632042"
      },
      "filetype": "application/pdf",
      "languages": [
        "eng"
      ],
      "page_number": 1
    },
    "text": "5",
    "type": "Footer"
  }
]
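
For context, an element dump like the one above is what the unstructured library produces when it partitions a PDF and serializes the resulting elements. Below is a minimal sketch of generating similar output locally; the input path and output filename are hypothetical, the partitioning strategy that yields Formula elements may differ between versions, and the data_source block in this fixture comes from the S3 ingest connector rather than from partitioning itself.

import json

from unstructured.partition.pdf import partition_pdf

# Partition a local copy of the PDF into typed elements
# (NarrativeText, Formula, Title, ListItem, Footer, ...).
elements = partition_pdf(filename="page-with-formula.pdf")  # hypothetical local path

# Each element exposes to_dict() with its type, element_id, text, and metadata
# (filetype, languages, page_number, ...), matching the structure shown above.
with open("page-with-formula.pdf.json", "w", encoding="utf-8") as f:
    json.dump([element.to_dict() for element in elements], f, indent=2, ensure_ascii=False)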
