diff --git a/08_pytorch_paper_replicating.ipynb b/08_pytorch_paper_replicating.ipynb
index b31fb04f..05518f19 100644
--- a/08_pytorch_paper_replicating.ipynb
+++ b/08_pytorch_paper_replicating.ipynb
@@ -4321,7 +4321,7 @@
 "## Main takeaways\n",
 "\n",
 "* With the explosion of machine learning, new research papers detailing advancements come out every day. And it's impossible to keep up with it *all* but you can narrow things down to your own use case, such as what we did here, replicating a computer vision paper for FoodVision Mini.\n",
- "* Machine learning research papers are often contain months of research by teams of smart people compressed into a few pages (so teasing out all the details and replicating the paper in full can be a bit of challenge). \n",
+ "* Machine learning research papers often contain months of research by teams of smart people compressed into a few pages (so teasing out all the details and replicating the paper in full can be a bit of a challenge).\n",
 "* The goal of paper replicating is to turn machine learning research papers (text and math) into usable code.\n",
 "    * With this being said, many machine learning research teams are starting to publish code with their papers and one of the best places to see this is at [Paperswithcode.com](https://paperswithcode.com/)\n",
 "* Breaking a machine learning research paper into inputs and outputs (what goes in and out of each layer/block/model?) and layers (how does each layer manipulate the input?) and blocks (a collection of layers) and replicating each part step by step (like we've done in this notebook) can be very helpful for understanding.\n",