Mastering PyTorch is a book by Ashish Ranjan Jha, published by Packt in early 2021, that covers a variety of topics, from training deep learning models with PyTorch to deploying them.
Last September, I received a complimentary review copy of the book from Packt (thanks Shifa!). My initial plan was to work through this book chapter by chapter and publish a podcast episode describing what I liked and what I'd criticize about it. That didn't pan out as expected, since I was moving a lot and got absorbed in a few personal projects. Regardless, I managed to spend roughly 2 hours per week on the book and have covered a solid chunk of it as of now (Sections 1, 2, 4). I think I'm ready to write a review!
The good things
The book is clearly divided into four sections.

Section 1 is all about warming up and getting excited about the possibilities by rapidly going over the basics and jumping into a few common examples. The most interesting part of this section was the detailed coverage of the CNN-LSTM architecture, which is used in auto-captioning of images. Jupyter notebooks to code along are also available in the GitHub repository for the book.

Section 2 sets the tone of the book by going back in time and starting off from some of the earliest vision models (LeNet, AlexNet, and so on) up to the current SOTA architectures. I found the accompanying architecture diagrams thorough and self-explanatory. The final chapter of the section, on hybrid models, is the bomb!

Section 3 dives deeper into GANs (I skipped this part).

Section 4 is the most valuable chunk of the book in my opinion. If you have some experience with PyTorch and just want to learn how to deploy your models, this is probably the most interesting bit of the book. A huge chunk of this section is dedicated to serving and cloud deployment of your models.
The not-so-good things
Here are some areas that could have been improved by the author and the publisher.
The brief overview of the basics is sometimes just too brief, which I guess makes sense since the book is meant for an intermediate reader. That being said, some sections of the book try to explain what a bash command does, which is not required at this level. Who is the target audience here?
Beyond Section 1, most attempts at code explanation fall flat. Often even common import statements are explained, while important code sections are skipped. For example, to get the hymenoptera_data dataset, the author asks the reader to download it from Kaggle, but does not explain how. Getting data from Kaggle requires setting up an API token and so on, which is considerably more involved than a bash command. Again, who is the target audience here?
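For reference, the Kaggle workflow the book glosses over looks roughly like the sketch below. This is my own outline, not the book's; the dataset slug is a placeholder, since I'm not certain of the exact one the author intends.

```shell
# Install the official Kaggle CLI
pip install kaggle

# Create an API token at https://www.kaggle.com/ (Account -> Create New Token),
# which downloads a kaggle.json credentials file; move it into place:
mkdir -p ~/.kaggle
mv ~/Downloads/kaggle.json ~/.kaggle/kaggle.json
chmod 600 ~/.kaggle/kaggle.json  # the CLI refuses world-readable credentials

# Download and unzip the dataset (placeholder slug -- substitute the real one)
kaggle datasets download -d <owner>/hymenoptera-data --unzip
```

That's several steps, an account, and a credentials file, so "get it from Kaggle" deserves at least a paragraph in a book at this level.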
Sections on bidirectional LSTMs and multidimensional RNNs are only described in a few sentences. Out of nowhere, multidimensional LSTMs are also mentioned and assumed to be known beforehand. I felt as if the author grew impatient through these sections.
In terms of chapter order in Section 4, Chapters 11-14 (distributed training, AutoML, explainable AI, rapid prototyping) build on the previous sections and could have appeared earlier, i.e. Chapter 10, on models in production, could have come at the very end.
Some general comments:
- Module versions and requirements should have been placed in the Preface of the book rather than repeated in each chapter.
- It would have really helped to have equation numbers, especially for the sections that describe Optimizers.
- The summary section at the end of each chapter is rarely helpful or needed. It could have been replaced with an exercise, perhaps?
- Overall, there is a general lack of docstrings in many places, making it difficult to try out the Jupyter notebooks independently. This is unlike other Packt publications in the same genre that I've read.
I'm not really sure why the book is printed in black and white when the author clearly refers to some architectures using color (for example, Fig 4.7 on LSTMs or Fig 5.1 on Transformers). At close to 40 EUR for the paperback, a color version is expected.
The code sections are printed in black monospaced text over (somewhat dark) grey blocks. This combination is odd and makes them difficult to read; other publishers in the same genre have figured out a better way to do this!
Sometimes code snippets or Jupyter console outputs appear as images (for example, Figs 1.14-1.18). This is also odd: the code is already given in grey blocks, so why not provide the stdout there as well?
Notes, dataset citations, and references are placed in separate square blocks or within parentheses. Footnotes or chapter-wise references would have been better suited.
The author recommends this book only if you already have some experience with deep learning. Should you get it? Yes! The book could be trimmed in many places and the formatting could be much improved, saving a lot of real estate. Nevertheless, the examples provided are solid and work without bugs (unlike those in many other books). The book's real offering stands: working examples of deep learning models across various domains using efficient PyTorch code. If you are comfortable reading books in digital format, get that over the printed version for the color images, or get the Packt subscription to access all of their books.