What do you think of PyTorch vs TensorFlow?
Have you seen this post?
Hopefully, as Balaji pointed out, The Batch will have a comprehensive answer very soon.
For now, to summarize: you can use either framework; both have very similar capabilities at this point in time. PyTorch adoption is very high in the research community, while TensorFlow still dominates commercial settings because of the additional deployment tooling the TensorFlow ecosystem provides, such as TensorFlow Serving and TensorFlow Extended (TFX).
Hi @Clarissa @SomeshChatterjee,
Could you both please upvote my Batch post or post a new reply on the thread I shared?
This should provide a better signal to the moderators that many people are interested in seeing this.
@Clarissa Thanks for coming up with the thread.
My personal choice is PyTorch.
I think I worried too much about this on my AI journey (not to imply I’m an expert in either).
A summary of my findings:
- TensorFlow is more popular in commercial businesses
- PyTorch is taught more in academia and used more in research papers
- It appears that TensorFlow is gaining a bit of ground in the latest Stack Overflow survey results, maybe since the TF 2.0 updates?
Personally, I found the decision was paralyzing me, so my goal is to learn both. My experience with frameworks is that once you know one, the other is usually quite similar apart from syntax differences.
To my knowledge, it's not even a question. Key points:
- From this source:
When we compare HuggingFace model availability for PyTorch vs TensorFlow, the results are staggering. Below we see a chart of the total number of models available on HuggingFace that are either PyTorch or TensorFlow exclusive, or available for both frameworks. As we can see, the number of models available for use exclusively in PyTorch absolutely blows the competition out of the water. Almost 92% of models are PyTorch exclusive, up from 85% last year. In contrast, only about 8% are TensorFlow exclusive, with only about 14% of all models available for TensorFlow (down from 16% last year).
and the trend has only gotten worse for TensorFlow since
- Also, from Lukas Biewald: nobody is using TensorFlow anymore
The end of the article you provided shares some good advice on when to pick PyTorch over TensorFlow. As far as industry standards are concerned, TensorFlow is often preferred over PyTorch. Here are some reasons for picking TensorFlow beyond what was shared in the article:
- You get to focus on the problem rather than figuring out how to wire the layers. Consider a conv layer followed by a flatten and a linear layer. In PyTorch, you have to make the additional effort of figuring out the input shape to that final linear layer. In TensorFlow, you get to focus on building the model and the framework infers the shapes for you. Try building an LSTM/RNN-based architecture and you'll observe a similar problem.
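To make the shape bookkeeping concrete, here is a minimal sketch (plain Python, no framework needed) of the arithmetic PyTorch makes you do by hand to size that final linear layer; the layer sizes below are just illustrative:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Output spatial size of a conv layer: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Example: a 28x28 input -> Conv2d(1, 16, kernel_size=3) -> Flatten -> Linear(?, 10)
h = w = conv2d_out(28, kernel=3)   # 26
flat_features = 16 * h * w         # in_features you must pass to nn.Linear
print(flat_features)               # 10816
```

In Keras, `layers.Dense(10)` after a `Flatten` needs no such number; the framework computes it on the first build.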
- TensorFlow brings a lot to the table within a single package:
a. Consider getting a model summary. In TensorFlow you can invoke `print(model.summary())` to view the results. In PyTorch, you have to install another library (such as torchinfo) to view the results.
b. There is no built-in PyTorch equivalent of TensorFlow's `model.fit`. Unless you are willing to write a custom training loop, you'll end up installing a third-party package that is hosted as a separate repository on GitHub.
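For readers who haven't seen it, here is a sketch of the custom training loop PyTorch expects you to write where Keras gives you a single `model.fit()` call; the synthetic data and tiny architecture are illustrative only:

```python
# Minimal PyTorch training loop -- the boilerplate Keras' model.fit() hides.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(64, 10)   # 64 samples, 10 features (synthetic)
y = torch.randn(64, 1)    # regression targets (synthetic)

model = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

for epoch in range(20):               # in Keras: model.fit(X, y, epochs=20)
    optimizer.zero_grad()             # reset accumulated gradients
    loss = loss_fn(model(X), y)       # forward pass
    loss.backward()                   # backward pass
    optimizer.step()                  # parameter update
print(f"final loss: {loss.item():.4f}")
```

Real loops also need batching, shuffling, validation, and metric tracking, which is exactly the surface area the third-party packages cover.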
As far as usage statistics go, companies needn't share their models due to IP concerns. So it's wrong to say that TensorFlow is gone for good.
As far as universities are concerned, PyTorch is used widely because universities teach this framework. My guesses as to why:
- TensorFlow 1 was hard and PyTorch was better at that point, so universities that picked PyTorch decided not to move.
- Meta and other companies using PyTorch (probably for the above reason) are likely funding research projects.
- Learners are influenced by the bad taste left by TensorFlow 1 and are not curious about TensorFlow 2.
- Dependence on other SOTA models that are PyTorch-only.
Getting back to the PyTorch-only SOTA models: one must consider model size and compute requirements when using them.
Just as PyTorch 2.0 brings a lot of improvements, I'm curious to see how TensorFlow will integrate with Flax/JAX and provide a rich developer experience while keeping performance in mind.
I don't want to get further into the discussion (since I have a clear bias and there's nothing to be gained here), but I just couldn't resist responding to this point:
To me, there is nothing worse than not knowing the dimensions. And this is also no longer true, since PyTorch introduced "lazy" layers like `LazyLinear` for convenience (there are cases where they're useful). But I still use conventional layers: to me, there's nothing more important than knowing the dimensions the model is working with. In other words, if I don't know the dimensions of the tensors at any given time (1D, 2D, 3D, 4D … and their concrete shapes), then I don't care about anything else. (Period.)
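For anyone who hasn't tried them, a small sketch of what the lazy layers do: `nn.LazyLinear` defers `in_features` until the first forward pass, then materializes exactly the same weights a conventional `nn.Linear` would have (shapes below are illustrative):

```python
# LazyLinear infers in_features from the first batch it sees;
# nn.Linear makes you state it up front.
import torch
from torch import nn

lazy = nn.LazyLinear(out_features=4)          # in_features deferred
explicit = nn.Linear(in_features=10, out_features=4)

x = torch.randn(2, 10)
lazy(x)                                       # first forward pass materializes the weights
assert lazy.weight.shape == explicit.weight.shape == (4, 10)
```

This is the convenience trade-off the post above describes: the lazy version saves you the shape arithmetic, at the cost of not knowing the dimensions until data flows through.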
The image that comes to mind when searching for an analogy is a beginner standing next to a cliff, looking at his phone for navigation … "how many 'units' do I need for the output?…"
One additional point: I haven't looked deeper into it, but there's a promising (lots of promises) programming language for AI called Mojo. If it lives up to them, I think it could hasten TensorFlow's "AltaVista" fate.
`LazyLinear` on torch version 2.0.0+cu118 on Google Colab, and 2.0.1 on my machine, leads to this warning:

```
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/lazy.py:180: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment.
  warnings.warn('Lazy modules are a new feature under heavy development '
```
Still, this is a good direction for PyTorch in general.
@balaji.ambresh Hmm… that's strange; it feels like I haven't seen this message for ages. Lazy layers were introduced with PyTorch v1.8 (something like 2 years ago…)
I had the same question. I used TensorFlow before and I really admired it. Your point is taken about PyTorch, because I am in the "Generative AI with LLMs" course right now, and we are using PyTorch. But I know that Dr. Ng is a Google alumnus, and there does not seem to be any series of courses on PyTorch. I am not giving up on TensorFlow.