# 🤗 Transformers Notebooks

You can find here a list of the official notebooks provided by Hugging Face.

We would also like to list interesting content created by the community here. If you wrote a notebook leveraging 🤗 Transformers and would like it listed, please open a Pull Request so it can be included under the Community notebooks.

## Hugging Face's notebooks 🤗

### Documentation notebooks

You can open any page of the documentation as a notebook in Colab (there is a button directly on those pages), but they are also listed here if you need them:

| Notebook | Description |
|:----------|:-------------|
| Quicktour of the library | A presentation of the various APIs in Transformers |
| Summary of the tasks | How to run the models of the Transformers library task by task |
| Preprocessing data | How to use a tokenizer to preprocess your data |
| Fine-tuning a pretrained model | How to use the Trainer to fine-tune a pretrained model |
| Summary of the tokenizers | The differences between the tokenizer algorithms |
| Multilingual models | How to use the multilingual models of the library |
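As a taste of what the preprocessing notebooks cover, loading a tokenizer and preparing a batch takes only a few lines. This is a minimal sketch; `bert-base-uncased` is just one example checkpoint from the Hub, and any other would work the same way:

```python
from transformers import AutoTokenizer

# Any checkpoint from the Hub works here; bert-base-uncased is just an example.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Tokenize a small batch, padding to the longest sequence in the batch
# and truncating anything beyond the model's maximum length.
batch = tokenizer(
    ["Hello, Transformers!", "Notebooks make experimentation easy."],
    padding=True,
    truncation=True,
)
print(batch["input_ids"])  # one list of token ids per input sentence
```

The same `batch` dictionary (input ids plus attention mask) is what the fine-tuning notebooks feed to the Trainer.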

### PyTorch Examples

#### Natural Language Processing[[pytorch-nlp]]

| Notebook | Description |
|:----------|:-------------|
| Train your tokenizer | How to train and use your very own tokenizer |
| Train your language model | How to easily start using transformers |
| How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task |
| How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task |
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS) |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT |
| How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM |
| How to train a language model from scratch | Highlight all the steps to effectively train a Transformer model on custom data |
| How to generate text | How to use different decoding methods for language generation with transformers |
| How to generate text (with constraints) | How to guide language generation with user-provided constraints |
| Reformer | How Reformer pushes the limits of language modeling |
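The text-generation notebooks above revolve around the `generate` API and its decoding strategies. A minimal sketch of top-k/top-p sampling, with `gpt2` chosen purely as a small example checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is just a small example checkpoint; any causal LM from the Hub works.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The Transformers library", return_tensors="pt")

# Top-k / top-p sampling; set do_sample=False for greedy decoding,
# or pass num_beams=... for beam search instead.
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Swapping the sampling arguments for `num_beams` or `do_sample=False` is exactly the comparison the "How to generate text" notebook walks through.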

#### Computer Vision[[pytorch-cv]]

| Notebook | Description |
|:----------|:-------------|
| How to fine-tune a model on image classification (Torchvision) | Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification |
| How to fine-tune a model on image classification (Albumentations) | Show how to preprocess the data using Albumentations and fine-tune any pretrained Vision model on Image Classification |
| How to fine-tune a model on image classification (Kornia) | Show how to preprocess the data using Kornia and fine-tune any pretrained Vision model on Image Classification |
| How to perform zero-shot object detection with OWL-ViT | Show how to perform zero-shot object detection on images with text queries |
| How to fine-tune an image captioning model | Show how to fine-tune BLIP for image captioning on a custom dataset |
| How to build an image similarity system with Transformers | Show how to build an image similarity system |
| How to fine-tune a SegFormer model on semantic segmentation | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation |
| How to fine-tune a VideoMAE model on video classification | Show how to preprocess the data and fine-tune a pretrained VideoMAE model on Video Classification |

#### Audio[[pytorch-audio]]

| Notebook | Description |
|:----------|:-------------|
| How to fine-tune a speech recognition model in English | Show how to preprocess the data and fine-tune a pretrained Speech model on TIMIT |
| How to fine-tune a speech recognition model in any language | Show how to preprocess the data and fine-tune a multi-lingually pretrained speech model on Common Voice |
| How to fine-tune a model on audio classification | Show how to preprocess the data and fine-tune a pretrained Speech model on Keyword Spotting |

#### Biological Sequences[[pytorch-bio]]

| Notebook | Description |
|:----------|:-------------|
| How to fine-tune a pre-trained protein model | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model |
| How to generate protein folds | See how to go from protein sequence to a full protein model and PDB file |
| How to fine-tune a Nucleotide Transformer model | See how to tokenize DNA and fine-tune a large pre-trained DNA "language" model |
| Fine-tune a Nucleotide Transformer model with LoRA | Train even larger DNA models in a memory-efficient way |

#### Other modalities[[pytorch-other]]

| Notebook | Description |
|:----------|:-------------|
| Probabilistic Time Series Forecasting | See how to train Time Series Transformer on a custom dataset |

#### Utility notebooks[[pytorch-utility]]

| Notebook | Description |
|:----------|:-------------|
| How to export model to ONNX | Highlight how to export and run inference workloads through ONNX |
| How to use Benchmarks | How to benchmark models with transformers |

### TensorFlow Examples

#### Natural Language Processing[[tensorflow-nlp]]

| Notebook | Description |
|:----------|:-------------|
| Train your tokenizer | How to train and use your very own tokenizer |
| Train your language model | How to easily start using transformers |
| How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task |
| How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task |
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS) |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT |
| How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM |

#### Computer Vision[[tensorflow-cv]]

| Notebook | Description |
|:----------|:-------------|
| How to fine-tune a model on image classification | Show how to preprocess the data and fine-tune any pretrained Vision model on Image Classification |
| How to fine-tune a SegFormer model on semantic segmentation | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation |

#### Biological Sequences[[tensorflow-bio]]

| Notebook | Description |
|:----------|:-------------|
| How to fine-tune a pre-trained protein model | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model |

#### Utility notebooks[[tensorflow-utility]]

| Notebook | Description |
|:----------|:-------------|
| How to train TF/Keras models on TPU | See how to train at high speed on Google's TPU hardware |

### Optimum notebooks

🤗 Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools to train and run models with maximum efficiency on targeted hardware.

| Notebook | Description |
|:----------|:-------------|
| How to quantize a model with ONNX Runtime for text classification | Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task |
| How to quantize a model with Intel Neural Compressor for text classification | Show how to apply static and dynamic quantization, as well as quantization-aware training, on a model using Intel Neural Compressor (INC) for any GLUE task |
| How to fine-tune a model on text classification with ONNX Runtime | Show how to preprocess the data and fine-tune a model on any GLUE task using ONNX Runtime |
| How to fine-tune a model on summarization with ONNX Runtime | Show how to preprocess the data and fine-tune a model on XSUM using ONNX Runtime |

### Community notebooks

More notebooks developed by the community are available here.
