Week 3 - TensorFlow video Adam optimizer does not have minimize function

In the Week 3 video on TensorFlow, Andrew demonstrates toward the end using tf.keras.optimizers.Adam and calling its minimize function. I was trying to follow along in a Jupyter notebook in VS Code. The TensorFlow version in my virtual environment is 2.20. The kernel reports the following error when running:

```
----> 9 optimizer.minimize(cost_fn, [w])
     10 print(w)

AttributeError: 'Adam' object has no attribute 'minimize'
```

These courses (including the assignments) use the versions of all the various packages that were current at the time they were published, which is April 2021 for most of DLS. I just checked the TensorFlow Introduction assignment in Week 3 of DLS Course 2 and it uses TF 2.3.0.

There is no guarantee that TF has stayed the same between 2.3 and 2.20. In the ML/DL/Python world, backwards compatibility of APIs does not seem to be “a thing”, unfortunately.

But that having been said, the TF APIs are also pretty flexible and frequently offer multiple ways to accomplish similar goals. We will see lots of examples as we go through the remaining courses in DLS. So there are several ways to proceed:

  1. “Hold that thought” and just proceed with DLS C2 and watch how things work in the assignments. You’ll see a lot more complex examples in C4 (ConvNets) and C5 (Sequence Models), but they will still be TF 2.3 or thereabouts. Note that DLS C3 is not a programming course, so it doesn’t talk more about TF.
  2. Take a look at the TensorFlow documentation site if you want to proceed immediately with building your own models using current versions of TF. You can also find the old versions of the documentation if you are curious. E.g. here’s the top level of the TF tutorial site.
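To give a concrete idea of option 2: in current TF (Keras 3 optimizers), minimize() is gone, but the same effect takes only a few lines with tf.GradientTape plus apply_gradients(). Here is a sketch using the cost from the video; the names w and cost_fn, the learning rate, and the iteration count are my choices for illustration, not from the course code:

```python
import tensorflow as tf

# Sketch only: the learning rate and iteration budget are my choices,
# using the cost J(w) = w**2 - 10*w + 25 = (w - 5)**2 from the video.
w = tf.Variable(0.0)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

def cost_fn():
    return w ** 2 - 10 * w + 25

# Equivalent of the old optimizer.minimize(cost_fn, [w]): record the
# cost on a GradientTape, then apply the gradients explicitly.
for _ in range(500):
    with tf.GradientTape() as tape:
        cost = cost_fn()
    grads = tape.gradient(cost, [w])
    optimizer.apply_gradients(zip(grads, [w]))

print(w.numpy())  # converges toward 5.0, the minimizer of (w - 5)**2
```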

And now that I think ε more about it, I don’t recall any instances in which we directly call “minimize()” in the later TF-based assignments. The typical pattern is to define the optimizer and the accuracy metric, call “compile()” on the defined model, and then call “fit()” to perform the actual training/minimization.
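Roughly, that compile/fit pattern looks like this; the model and data below are made up purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Toy data, invented for this sketch: label is 1 when the features sum > 2.
X = np.random.rand(200, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Define the optimizer and metric, compile, then fit; fit() runs the
# forward pass, gradient computation, and optimizer updates internally.
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="binary_crossentropy",
              metrics=["accuracy"])
history = model.fit(X, y, epochs=5, verbose=0)
print(history.history["loss"][-1])
```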


Here’s one way to minimize a custom function:

```python
import tensorflow as tf

print(tf.__version__)

x = tf.Variable(0.0)
optimizer = tf.keras.optimizers.Adam()
cost = None
num_iterations = 0
# Iterate until the float32 cost rounds to exactly 0 at the minimum x = 5
while cost != 0:
    num_iterations += 1
    with tf.GradientTape() as tape:
        cost = x ** 2 - 10 * x + 25  # custom function, (x - 5)**2
    # Compute and apply the gradient outside the tape context, so the
    # gradient computation itself is not recorded on the tape
    gradient = tape.gradient(cost, x)
    optimizer.apply([gradient], [x])
print(x)
print(f'num iterations = {num_iterations}')
```

Output:

```
2.19.0
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=4.998349666595459>
num iterations = 9083
```
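A side note on that iteration count: Adam’s default learning rate is 0.001, so climbing from 0 to 5 takes thousands of small steps, plus more to settle until the float32 cost rounds to exactly 0. A larger learning rate and a fixed iteration budget (my own tweak, not part of the code above) get close to the minimum in far fewer steps:

```python
import tensorflow as tf

# Same cost as above, but with learning_rate=0.1 (my choice) and a fixed
# iteration budget instead of waiting for the cost to hit exactly 0.
x = tf.Variable(0.0)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)
for _ in range(500):
    with tf.GradientTape() as tape:
        cost = x ** 2 - 10 * x + 25  # (x - 5)**2
    grad = tape.gradient(cost, x)
    optimizer.apply_gradients([(grad, x)])
print(x.numpy())  # close to 5 after far fewer than 9083 iterations
```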