{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "23 - Tensor workflows",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4Pjmz-RORV8E"
      },
      "source": [
        "# Tensor workflows\n",
        "\n",
        "Many of the examples and use cases for txtai focus on transforming text. Makes sense as txt is even in the name! But that doesn't mean txtai only works with text.\n",
        "\n",
        "This notebook will cover examples of how to efficiently process tensors using txtai workflows."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Dk31rbYjSTYm"
      },
      "source": [
        "# Install dependencies\n",
        "\n",
        "Install `txtai` and all dependencies. We will install the api, pipeline and workflow optional extras packages, along with the datasets package."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XMQuuun2R06J"
      },
      "source": [
        "%%capture\n",
        "!pip install git+https://github.com/neuml/txtai#egg=txtai[api,pipeline,workflow] datasets"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NSYrP0hjtR_E"
      },
      "source": [
        "# Transform large tensor arrays\n",
        "\n",
        "The first section attempts to apply a simple transform to a very large memory-mapped array (2,000,000 x 1024)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BoPJIKWoTibk",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 220
        },
        "outputId": "143a6e4d-fe56-4353-e8ee-595ddfc12249"
      },
      "source": [
        "import numpy as np\n",
        "import torch\n",
        "\n",
        "# Generate large memory-mapped array\n",
        "rows, cols = 2000000, 1024\n",
        "data = np.memmap(\"data.npy\", dtype=np.float32, mode=\"w+\", shape=(rows, cols))\n",
        "del data\n",
        "\n",
        "# Open memory-mapped array\n",
        "data = np.memmap(\"data.npy\", dtype=np.float32, shape=(rows, cols))\n",
        "\n",
        "# Create tensor\n",
        "tensor = torch.from_numpy(data).to(\"cuda:0\")\n",
        "\n",
        "# Apply tanh transform to tensor\n",
        "torch.tanh(tensor).shape"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "error",
          "ename": "RuntimeError",
          "evalue": "ignored",
          "traceback": [
            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
            "\u001b[0;31mRuntimeError\u001b[0m                              Traceback (most recent call last)",
            "\u001b[0;32m<ipython-input-2-a1fc94fedb69>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     14\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     15\u001b[0m \u001b[0;31m# Apply tanh transform to tensor\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 16\u001b[0;31m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtanh\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtensor\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mshape\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
            "\u001b[0;31mRuntimeError\u001b[0m: CUDA out of memory. Tried to allocate 7.63 GiB (GPU 0; 11.17 GiB total capacity; 7.63 GiB already allocated; 3.04 GiB free; 7.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "O8mKzPP01d_m",
        "outputId": "929226a8-6948-4d17-ab70-025da2081abd"
      },
      "source": [
        "!ls -l --block-size=MB data.npy"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "-rw-r--r-- 1 root root 8192MB Dec  6 23:24 data.npy\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vuObmAJ9FaJe"
      },
      "source": [
        "Not surprisingly, this runs out of CUDA memory. The array needs `2,000,000 * 1024 * 4 = 8GB`, which exceeds the amount of GPU memory available.\n",
        "\n",
        "One of the great things about NumPy arrays and PyTorch tensors is that they can be sliced without copying data. Additionally, PyTorch can work directly on NumPy arrays without copying data; in other words, a NumPy array and a PyTorch tensor can share the same memory. This opens the door to processing tensor data efficiently in place.\n",
        "\n",
        "Let's try applying a simple tanh transform in batches over the array."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ciD6unQYD-bJ",
        "outputId": "d3b6a0c5-aea5-451d-d3e3-60d04ef33a9e"
      },
      "source": [
        "def process(x):\n",
        "  print(x.shape)\n",
        "  return torch.tanh(torch.from_numpy(x).to(\"cuda:0\")).cpu().numpy()\n",
        "\n",
        "# Split into 250,000 rows per call\n",
        "batch = 250000\n",
        "count = 0\n",
        "for x in range(0, len(data), batch):\n",
        "  for row in process(data[x : x + batch]):\n",
        "    count += 1\n",
        "\n",
        "print(count)"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "2000000\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uZBzEMsRHpsi"
      },
      "source": [
        "Iterating over the data array and selecting slices to operate on allows the transform to complete successfully! Each `torch.from_numpy` call builds a view of a portion of the existing large NumPy data array."
      ]
    },
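    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check of that no-copy behavior (a small added sketch on a throwaway array, separate from the large memory-mapped array above), the cell below shows that a tensor created with `torch.from_numpy` shares memory with the NumPy array it wraps. An in-place update on the tensor is immediately visible from the array."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Small throwaway array used only for this check\n",
        "small = np.zeros((2, 4), dtype=np.float32)\n",
        "\n",
        "# Wrap the array as a tensor, no data is copied\n",
        "view = torch.from_numpy(small)\n",
        "\n",
        "# Update the tensor in place\n",
        "view.fill_(1.0)\n",
        "\n",
        "# The NumPy array reflects the change because both share the same memory\n",
        "print(small)"
      ],
      "execution_count": null,
      "outputs": []
    },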
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Oe7X17vbHJRV"
      },
      "source": [
        "# Enter workflows\n",
        "\n",
        "The next section takes the same array and shows how workflows can apply transformations to tensors."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ymqr92kW9hxd",
        "outputId": "e4a4c7d1-be54-46c2-bc7c-dd849adfeb7e"
      },
      "source": [
        "from txtai.workflow import Task, Workflow\n",
        "\n",
        "# Create workflow with a single task calling process for each batch\n",
        "task = Task(process)\n",
        "workflow = Workflow([task], batch)\n",
        "\n",
        "# Run workflow\n",
        "count = 0\n",
        "for row in workflow(data):\n",
        "  count += 1\n",
        "\n",
        "print(count)"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "(250000, 1024)\n",
            "2000000\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "B9qC8qUbHjfk"
      },
      "source": [
        "Workflows process the data in the same fashion as the code in the previous section. On top of that, workflows can handle text, images, video, audio, documents, tensors and more. Workflow graphs can also be connected together to handle complex use cases, as sketched below."
      ]
    },
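    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a small sketch of chaining tasks on tensors (reusing the `data` array and `batch` size from above), the workflow below feeds the output of a tanh task into a second task that reduces each row to its mean. The lambda actions here are only illustrative; the output count should once again match the number of rows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Chain two tensor tasks, the second task consumes the first task's output\n",
        "tanh = Task(lambda x: torch.tanh(torch.from_numpy(x).to(\"cuda:0\")))\n",
        "mean = Task(lambda x: x.mean(axis=1).cpu().numpy())\n",
        "\n",
        "chained = Workflow([tanh, mean], batch)\n",
        "\n",
        "# Count output elements, one mean value per input row\n",
        "count = sum(1 for _ in chained(data))\n",
        "print(count)"
      ],
      "execution_count": null,
      "outputs": []
    },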
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wCRD9ERoJvsG"
      },
      "source": [
        "# Workflows with PyTorch models\n",
        "\n",
        "The next example applies a PyTorch model to the same data. The model applies a series of transforms and outputs a single float per row."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "N9UTfSTTIDaO",
        "outputId": "000c7e13-a249-43d0-f419-28ddd62e8ba1"
      },
      "source": [
        "from torch import nn\n",
        "\n",
        "class Model(nn.Module):\n",
        "    def __init__(self):\n",
        "        super().__init__()\n",
        "\n",
        "        self.relu = nn.ReLU()\n",
        "        self.linear1 = nn.Linear(1024, 512)\n",
        "        self.dropout = nn.Dropout(0.5)\n",
        "        self.norm = nn.LayerNorm(512)\n",
        "        self.linear2 = nn.Linear(512, 1)\n",
        "\n",
        "    def forward(self, inputs):\n",
        "        outputs = self.relu(inputs)\n",
        "        outputs = self.linear1(outputs)\n",
        "        outputs = self.dropout(outputs)\n",
        "        outputs = self.norm(outputs)\n",
        "        outputs = self.linear2(outputs)\n",
        "\n",
        "        return outputs\n",
        "\n",
        "model = Model().to(\"cuda:0\")\n",
        "\n",
        "# Set eval mode so dropout is disabled during inference\n",
        "model.eval()\n",
        "\n",
        "def process(x):\n",
        "  with torch.no_grad():\n",
        "    outputs = model(torch.from_numpy(x).to(\"cuda:0\")).cpu().numpy()\n",
        "    print(outputs.shape)\n",
        "    return outputs\n",
        "\n",
        "# Create workflow with a single task calling model for each batch\n",
        "task = Task(process)\n",
        "workflow = Workflow([task], batch)\n",
        "\n",
        "# Run workflow\n",
        "count = 0\n",
        "for row in workflow(data):\n",
        "  count += 1\n",
        "\n",
        "print(count)"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "(250000, 1)\n",
            "(250000, 1)\n",
            "(250000, 1)\n",
            "(250000, 1)\n",
            "(250000, 1)\n",
            "(250000, 1)\n",
            "(250000, 1)\n",
            "(250000, 1)\n",
            "2000000\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Q1ivX4eBuU8T"
      },
      "source": [
        "Once again the data can be processed in batches using workflows, even with a more complex model. Let's try a more interesting example."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KoSB1mKzUnb0"
      },
      "source": [
        "# Workflows in parallel\n",
        "\n",
        "Workflows consist of a series of tasks. Each task can produce one or more outputs per input element. Multi-output tasks have options available to [merge the data](https://neuml.github.io/txtai/workflow/task/#multi-action-task-merges) for downstream tasks.\n",
        "\n",
        "The following example builds a workflow with a task having three separate actions. Each action takes text as an input and applies a sentiment classifier. This is followed by a task that merges the three outputs for each row using a mean transform. Essentially, this workflow builds an ensemble sentiment classifier from the outputs of three models."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JlCdVgo_LXOl"
      },
      "source": [
        "import time\n",
        "\n",
        "from datasets import load_dataset\n",
        "from transformers import AutoTokenizer, AutoModelForSequenceClassification\n",
        "\n",
        "class Tokens:\n",
        "    def __init__(self, texts):\n",
        "        tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased-finetuned-sst-2-english\")\n",
        "        tokens = tokenizer(texts, padding=True, return_tensors=\"pt\").to(\"cuda:0\")\n",
        "\n",
        "        self.inputs, self.attention = tokens[\"input_ids\"], tokens[\"attention_mask\"]\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.inputs)\n",
        "\n",
        "    def __getitem__(self, value):\n",
        "        return (self.inputs[value], self.attention[value])\n",
        "\n",
        "class Classify:\n",
        "    def __init__(self, model):\n",
        "        self.model = model\n",
        "\n",
        "    def __call__(self, tokens):\n",
        "        with torch.no_grad():\n",
        "            inputs, attention = tokens\n",
        "            outputs = self.model(input_ids=inputs, attention_mask=attention)\n",
        "            outputs = outputs[\"logits\"]\n",
        "\n",
        "        return outputs\n",
        "\n",
        "# Load reviews from the rotten tomatoes dataset\n",
        "ds = load_dataset(\"rotten_tomatoes\")\n",
        "texts = ds[\"train\"][\"text\"]\n",
        "\n",
        "tokens = Tokens(texts)\n",
        "\n",
        "model1 = AutoModelForSequenceClassification.from_pretrained(\"M-FAC/bert-tiny-finetuned-sst2\")\n",
        "model1 = model1.to(\"cuda:0\")\n",
        "\n",
        "model2 = AutoModelForSequenceClassification.from_pretrained(\"howey/electra-base-sst2\")\n",
        "model2 = model2.to(\"cuda:0\")\n",
        "\n",
        "model3 = AutoModelForSequenceClassification.from_pretrained(\"philschmid/MiniLM-L6-H384-uncased-sst2\")\n",
        "model3 = model3.to(\"cuda:0\")\n",
        "\n",
        "task1 = Task([Classify(model1), Classify(model2), Classify(model3)])\n",
        "task2 = Task([lambda x: torch.sigmoid(x).mean(axis=1).cpu().numpy()])\n",
        "\n",
        "workflow = Workflow([task1, task2], 250)\n",
        "\n",
        "start = time.time()\n",
        "for x in workflow(tokens):\n",
        "  pass\n",
        "\n",
        "print(f\"Took {time.time() - start} seconds\")"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Using custom data configuration default\n",
            "Reusing dataset rotten_tomatoes_movie_review (/root/.cache/huggingface/datasets/rotten_tomatoes_movie_review/default/1.0.0/40d411e45a6ce3484deed7cc15b82a53dad9a72aafd9f86f8f227134bec5ca46)\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Took 84.73194456100464 seconds\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eBQzLrtAVUtB"
      },
      "source": [
        "Note that while the task actions are parallel, that doesn't necessarily mean the operations are concurrent. In the case above, the actions are executed sequentially.\n",
        "\n",
        "Workflows have an additional option to run task actions concurrently. The two supported modes are \"thread\" and \"process\". I/O-bound actions will do better with multithreading and CPU-bound actions will do better with multiprocessing. More can be read in the [txtai documentation](https://neuml.github.io/txtai/workflow/task/#multi-action-task-concurrency)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "AB0onoOlVT-e",
        "outputId": "a072d8a6-3b3b-4066-8881-8af9a8b96608"
      },
      "source": [
        "task1 = Task([Classify(model1), Classify(model2), Classify(model3)], concurrency=\"thread\")\n",
        "task2 = Task([lambda x: torch.sigmoid(x).mean(axis=1).cpu().numpy()])\n",
        "\n",
        "workflow = Workflow([task1, task2], 250)\n",
        "\n",
        "start = time.time()\n",
        "for x in workflow(tokens):\n",
        "  pass\n",
        "\n",
        "print(f\"Took {time.time() - start} seconds\")"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Took 85.21102929115295 seconds\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5s2KexhG_udx"
      },
      "source": [
        "In this case, concurrency doesn't improve performance. While the [GIL](https://wiki.python.org/moin/GlobalInterpreterLock) is a factor, a bigger factor is that the GPU is already fully loaded. This method would be more beneficial if the system had a second GPU or the primary GPU had idle cycles."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EoQFEi_61P9O"
      },
      "source": [
        "# Wrapping up\n",
        "\n",
        "This notebook introduced a number of ways to process large-scale tensor data efficiently. It purposely didn't cover embeddings and pipelines in order to demonstrate how workflows can stand on their own. In addition to workflows, it covered efficient methods for working with large tensor arrays in NumPy and PyTorch."
      ]
    }
  ]
}
