Unable to Generate My Own Art with Neural Style Transfer Code

I passed all the tests for the Art_Generation_with_Neural_Style_Transfer lab and submitted it for full credit. I’ve already finished the certificate, but when I downloaded the files to generate my own art, I can’t get the notebook to run anymore. I’ve searched Stack Overflow but can’t find anything that helps me solve the issue. I would greatly appreciate any help. I get this error:

7 frames
/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/optimizer.py in _var_key(self, variable)
86 # is added to their handle tensors.
87 variable = variable.handle._distributed_container()
---> 88 return variable._unique_id
89
90 def _apply_weight_decay(self, variables):

AttributeError: in user code:

File "<ipython-input-21-5eba9767e718>", line 30, in train_step  *
    optimizer.apply_gradients([(grad, generated_image)])
File "/usr/local/lib/python3.10/dist-packages/keras/src/optimizers/base_optimizer.py", line 282, in apply_gradients  **
    self.apply(grads, trainable_variables)
File "/usr/local/lib/python3.10/dist-packages/keras/src/optimizers/base_optimizer.py", line 321, in apply
    self.build(trainable_variables)
File "/usr/local/lib/python3.10/dist-packages/keras/src/optimizers/adam.py", line 92, in build
    super().build(var_list)
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/tracking.py", line 26, in wrapper
    return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/keras/src/optimizers/base_optimizer.py", line 152, in build
    self._trainable_variables_indices[self._var_key(variable)] = i
File "/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/optimizer.py", line 88, in _var_key
    return variable._unique_id

AttributeError: 'SymbolicTensor' object has no attribute '_unique_id'
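
For context, the failing line is inside the train_step from the lab notebook. From memory (so the helper names may not match the lab exactly; the cost helpers and the VGG outputs model are defined earlier in the notebook), it looks roughly like this:

    import tensorflow as tf

    # generated_image is meant to be a tf.Variable initialized from the content image
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

    @tf.function()
    def train_step(generated_image):
        with tf.GradientTape() as tape:
            a_G = vgg_model_outputs(generated_image)    # activations of the generated image
            J_style = compute_style_cost(a_S, a_G)      # style cost vs. the style image activations
            J_content = compute_content_cost(a_C, a_G)  # content cost vs. the content image activations
            J = total_cost(J_content, J_style)          # weighted sum of the two costs
        grad = tape.gradient(J, generated_image)
        optimizer.apply_gradients([(grad, generated_image)])  # <-- the line that blows up locally
        generated_image.assign(clip_0_1(generated_image))     # keep pixel values in [0, 1]
        return J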

To replicate the assignment in your local environment, you have to use the same versions of all the libraries as used in the Coursera environment. There is no official guide for this but you may find this post helpful.
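
If you go that route, one quick way to see which versions to match is to print them from inside the Coursera lab notebook and then pin the same ones locally. The packages below are just the obvious ones for this lab; add whatever else the notebook imports:

    # Run this in a cell of the Coursera lab notebook to record its versions
    import tensorflow as tf
    import numpy as np
    import PIL

    print("tensorflow:", tf.__version__)
    print("numpy:", np.__version__)
    print("Pillow:", PIL.__version__)

    # Then install the same versions in your local environment, e.g.:
    #   pip install "tensorflow==<version printed above>" "numpy==<...>" "Pillow==<...>"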


Hello Mr. saifkhanengr, thank you for your response, but the post says this about this particular project:
W4A2 - Deep Learning & Art: Neural Style Transfer

    This should be OK with no modifications.

I saw another post on these boards where at least one person was able to generate his own art, so it seems like it was possible to run these files in a local environment. I’m opening it up once again and will continue to seek assistance. Any possible ideas to try are appreciated. Thanks, guys!

Please give us a reference for where it says that.

Did you actually read the thread that Saif gave about how to duplicate the environment?

But even if that was a true statement, say, a year ago, that someone got it to run with the then-current versions, the APIs continue to evolve. So just because someone else was able to get this to work with the current versions at some point in the past does not guarantee that it will continue to be true for the entire (as yet unwritten) future, right?


Hi Paul,

The post where the guy posts his Neural Style Transfer art is here (it’s by a user, avsharp). The part where the above quote is found is about a third of the way down in the post that Saif shared. Sorry, Saif and Paul, for my frustration. I had spent so many days and hours trying to find the answer, and I was also feeling really disappointed because, after finishing the entire certificate, I felt like I had learned so much but at the same time had not learned enough to apply that knowledge to my own simple projects. Thank you guys for your help and guidance on these boards.

Don’t be disappointed.

Slowly Is the Fastest Way to Get to Where You Want to Be

I still don’t know how to replicate all the assignments in my local environment. And, frankly speaking, it’s not necessary all the time (at least in my case).


Thanks for the quote, Saif, it really puts the journey into perspective. I’m considering trying to learn Neural Style Transfer with PyTorch, because many people on online discussion boards were criticizing TensorFlow for not appearing to have a lot of support, and I was also reading the opinion of some who believe that PyTorch is becoming ubiquitous. What do you think?

Learning PyTorch is a great idea just in general. If you’re going to be doing this as a career, knowing both PyTorch and TF is highly recommended. Just like learning more programming languages: it can’t hurt to have that on your resumé, right? As you have observed, I’m sure, almost everything here at DLAI is done in TF, but there is one series that uses PyTorch: the GANs Specialization. It’s very interesting material in its own right, but may not be relevant to your career goals. But you could just sign up for GANs Course 1 in “audit” mode and check out the material in Week 1 that gives you an “intro to PyTorch”. I’m sure you can also find some good torch tutorials with a quick Google search.

I have to say I found torch a lot more natural and intuitive than TF. It only ever supported what TF calls “Eager Mode”, so you don’t have to deal with all the complexity of defining and then executing graphs.
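
To make the eager-mode point concrete, here is roughly what the same kind of pixel-optimization step looks like in torch. This is just a sketch: the loss is a trivial stand-in for the real VGG-based content and style costs, and the shapes are arbitrary.

    import torch

    # The generated image itself is the parameter being optimized,
    # just like the tf.Variable in the TF version of the lab.
    generated_image = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([generated_image], lr=0.01)

    def nst_loss(img):
        # Stand-in for the real content + style cost
        return (img ** 2).mean()

    for step in range(100):
        optimizer.zero_grad()
        loss = nst_loss(generated_image)
        loss.backward()      # runs eagerly; no separate graph-building step
        optimizer.step()
        with torch.no_grad():
            generated_image.clamp_(0.0, 1.0)  # analogue of clip_0_1 in the TF lab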


Hi Paul,

Thanks for the tips. I guess for the time being I’ll continue studying TF, especially since I think that’s what they use in the Natural Language Processing certificate, which is the one I wanted to do next. But I will start thinking about torch; maybe as a preliminary I’ll do what you said and check out the first week of GANs, and after the NLP, see if I can find a thorough tutorial.

Any recommendations for getting a better handle on TF? I’m planning on checking out the TF documentation, but honestly, I don’t understand a lot of really technical manuals. I think it’s because I don’t have a comp-sci background; I usually find myself learning from blogs like “geeksforgeeks”, etc.

Consider taking the TensorFlow Professional certificate and Advanced TensorFlow. Both specializations are offered by Deeplearning.ai. Each has four courses (I guess).

Just trying to read the raw low level documentation for the various TF APIs is not the most efficient way to learn. The TF website also has lots of tutorials that approach the teaching from a problem solving point of view. E.g. here’s how to build and train a network to do X, Y or Z. Here’s the top level of the TF Tutorials section. Have a look there and see if you can find any topics that look appropriate for the types of problems you are interested in.

Of course Saif’s suggestion of taking advantage of the rich set of TF related courses here from DLAI is also an excellent approach. It just depends on how much time you have and what your goals are.

I think I will take those certificates. Maybe I’ll audit the first one.

Yeah, I think I’ll audit the first certificate. I read many of the reviews and some people were saying that the first couple of courses really just follow the TF tutorials. I find value in getting some guidance on those but I wouldn’t want to pay for it. The more advanced certificates seem really good, and more specific to TF, whereas many were saying that the beginning certificate is more about Keras.

@Hec_1 I think it is fair game for me to mention this, because it is not a topic covered here (though also, I had suggested to Paul it was something I greatly wanted to learn… but had no idea where to do so).

And I mean even these tutorials I will suggest are a little out of date (circa 2022?). I’ve known about Jeremy Howard for some time (he was, among other things, president of Kaggle in its early days), but I had thought they were mostly teaching toward his (free, mind you) Fast.ai library.

The other day someone suggested I take another look, because that is not the case (and here I’m only talking about Part 2 of his free course, on Stable Diffusion and diffusion models). The very beginning goes into the high level, how to use the Stable Diffusion model in your own code, but then basically every lesson after the first is about building your own diffusion model from the ground up, starting from just the raw Python libraries.
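
For what it’s worth, the high-level part at the start is essentially the Hugging Face diffusers pipeline, something roughly like this (a sketch: it assumes you have diffusers installed and a GPU available, and the model id is just the commonly used v1.5 checkpoint):

    import torch
    from diffusers import StableDiffusionPipeline

    # Download a pretrained Stable Diffusion checkpoint (several GB on first run)
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # One line of "art generation" at the highest level of abstraction
    image = pipe("a watercolor painting of a fox in the snow").images[0]
    image.save("generated.png")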

I am really liking it thus far, and his teaching style reminds me a bit of Prof. Ng, who was great as a teacher. But he also goes into explaining the Python code more, because he uses more complicated techniques.

Keep in mind, the course is something like 30 hours of video (only on diffusion models) and is free, but there’s no certificate, if that is what’s important to you.

I am still working on it, a bit at a time, so I can’t give a final verdict, though I’m really liking it so far.

I mean, I think NST was great to learn as part of the many facets of DLS, but as a technique for image generation… honestly, it is looking a little ‘long in the tooth’.

However, consider that GANs, say, can be used for a lot more than just image generation (and he doesn’t cover that; that can only be found here).