diff --git a/08_pytorch_paper_replicating.ipynb b/08_pytorch_paper_replicating.ipynb
index 05518f19..e89a70c9 100644
--- a/08_pytorch_paper_replicating.ipynb
+++ b/08_pytorch_paper_replicating.ipynb
@@ -4374,7 +4374,7 @@
     "source": [
     "## Extra-curriculum\n",
     "\n",
-    "* There have been several iterations and tweaks to the Vision Transformer since its original release and the most concise and best performing (as of July 2022) can be viewed in [*Better plain ViT baselines for ImageNet-1k*](https://arxiv.org/abs/2205.01580). Depsite of the upgrades, we stuck with replicating a \"vanilla Vision Transformer\" in this notebook because if you understand the structure of the original, you can bridge to different iterations.\n",
+    "* There have been several iterations and tweaks to the Vision Transformer since its original release and the most concise and best performing (as of July 2022) can be viewed in [*Better plain ViT baselines for ImageNet-1k*](https://arxiv.org/abs/2205.01580). Despite the upgrades, we stuck with replicating a \"vanilla Vision Transformer\" in this notebook because if you understand the structure of the original, you can bridge to different iterations.\n",
     "* The [`vit-pytorch` repository on GitHub by lucidrains](https://github.com/lucidrains/vit-pytorch) is one of the most extensive resources of different ViT architectures implemented in PyTorch. It's a phenomenal reference and one I used often to create the materials we've been through in this chapter. \n",
     "* PyTorch have their [own implementation of the ViT architecture on GitHub](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py), it's used as the basis of the pretrained ViT models in `torchvision.models`.\n",
     "* Jay Alammar has fantastic illustrations and explanations on his blog of the [attention mechanism](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/) (the foundation of Transformer models) and [Transformer models](https://jalammar.github.io/illustrated-transformer/). \n",
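
To make the `torchvision.models` pointer in the third bullet of the diffed cell concrete, here is a minimal sketch of loading one of the pretrained ViT models it refers to. This is an illustrative example, not code from the notebook, and it assumes torchvision >= 0.13, where the multi-weight enum API (`ViT_B_16_Weights`) was introduced:

```python
# Minimal sketch: load a pretrained ViT-Base/16 from torchvision.models
# (assumes torchvision >= 0.13 for the weights enum API).
import torch
import torchvision

weights = torchvision.models.ViT_B_16_Weights.DEFAULT  # ImageNet-1k pretrained weights
model = torchvision.models.vit_b_16(weights=weights)
model.eval()

# The weights bundle their own preprocessing (resize, crop, normalize);
# run real images through `preprocess` before passing them to the model.
preprocess = weights.transforms()

# Forward pass on a dummy 224x224 RGB image batch.
dummy_image = torch.randn(1, 3, 224, 224)
with torch.inference_mode():
    logits = model(dummy_image)

print(logits.shape)  # torch.Size([1, 1000]) -> one logit per ImageNet-1k class
```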