C3_W3_A1_Assignment - Reinforcement Learning - AttributeError: 'builtin_function_or_method' object has no attribute '__code__'

This issue is specific to Step 9, in a read-only cell. When I run it, it gives the errors below. Note that this cell does not require me to write any code; it is a read-only cell whose code only needs to be executed.

It seems to be some sort of incompatibility between libraries.
Thanks in advance for any support.

See the tracebacks below:

WARNING:tensorflow:AutoGraph could not transform <bound method b2World.CreateBody of b2World(autoClearForces=True,
bodies=[b2Body(active=True,
angle=0.0,
angularDamping=0.0,
angularVelocity=0.0,
awake=True,
bullet=False,
contacts=[b2ContactEdge(contact=b2Contact(childIndexA=0,... )],
bodyCount=25,
contactCount=6,
contactFilter=None,
contactListener=ContactDetector(),
contactManager=b2ContactManager(allocator=<Swig Object of type 'b2BlockAllocator *' at 0x7029e217ef30>,
broadPhase=proxyCount=35,),
contactCount=6,... ),
contacts=[b2Contact(childIndexA=0,
childIndexB=0,
enabled=True,
fixtureA=b2Fixture(body=b2Body(active=True,
angle=0.0,... )],
continuousPhysics=True,
destructionListener=None,
gravity=b2Vec2(0,-10),
jointCount=2,
joints=[b2RevoluteJoint(active=True,
anchorA=b2Vec2(11.6151,13.4853),
anchorB=b2Vec2(11.6151,13.4853),
angle=0.3959982693195343,
bodyA=b2Body(active=True,... )],
locked=False,
proxyCount=35,
renderer=None,
subStepping=False,
warmStarting=True,
)> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Mangled names are not yet supported by AutoGraph
WARNING: AutoGraph could not transform <bound method b2World.CreateBody of b2World(autoClearForces=True,
bodies=[b2Body(active=True,
angle=0.0,
angularDamping=0.0,
angularVelocity=0.0,
awake=True,
bullet=False,
contacts=[b2ContactEdge(contact=b2Contact(childIndexA=0,... )],
bodyCount=25,
contactCount=6,
contactFilter=None,
contactListener=ContactDetector(),
contactManager=b2ContactManager(allocator=<Swig Object of type 'b2BlockAllocator *' at 0x7029e7164030>,
broadPhase=proxyCount=35,),
contactCount=6,... ),
contacts=[b2Contact(childIndexA=0,
childIndexB=0,
enabled=True,
fixtureA=b2Fixture(body=b2Body(active=True,
angle=0.0,... )],
continuousPhysics=True,
destructionListener=None,
gravity=b2Vec2(0,-10),
jointCount=2,
joints=[b2RevoluteJoint(active=True,
anchorA=b2Vec2(11.6151,13.4853),
anchorB=b2Vec2(11.6151,13.4853),
angle=0.3959982693195343,
bodyA=b2Body(active=True,... )],
locked=False,
proxyCount=35,
renderer=None,
subStepping=False,
warmStarting=True,
)> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Mangled names are not yet supported by AutoGraph
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-29-937e1e128d98> in <module>
44 # Set the y targets, perform a gradient descent step,
45 # and update the network weights.
---> 46 agent_learn(experiences, GAMMA)
47
48 state = next_state.copy()

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
566 xla_context.Exit()
567 else:
--> 568 result = self._call(*args, **kwds)
569
570 if tracing_count == self._get_tracing_count():

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
604 # In this case we have not created variables on the first call. So we can
605 # run the first trace but we should fail if variables are created.
--> 606 results = self._stateful_fn(*args, **kwds)
607 if self._created_variables:
608 raise ValueError("Creating variables on a non-first call to a function"

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
2360 """Calls a graph function specialized to the inputs."""
2361 with self._lock:
-> 2362 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
2363 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
2364

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2701
2702 self._function_cache.missed.add(call_context_key)
-> 2703 graph_function = self._create_graph_function(args, kwargs)
2704 self._function_cache.primary[cache_key] = graph_function
2705 return graph_function, args, kwargs

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2591 arg_names=arg_names,
2592 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2593 capture_by_value=self._capture_by_value),
2594 self._function_attributes,
2595 # Tell the ConcreteFunction to clean up its graph once it goes out of

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
976 converted_func)
977
--> 978 func_outputs = python_func(*func_args, **func_kwargs)
979
980 # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
437 # __wrapped__ allows AutoGraph to swap in a converted function. We give
438 # the function a weak reference to itself to avoid a reference cycle.
--> 439 return weak_wrapped_fn().__wrapped__(*args, **kwds)
440 weak_wrapped_fn = weakref.ref(wrapped_fn)
441

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise

AttributeError: in converted code:

<ipython-input-14-67679b154bf0>:14 agent_learn *
loss = compute_loss(experiences, gamma, q_network, target_q_network)
<ipython-input-12-72b5a52b4536>:31 compute_loss *
next_state, reward, done, _ = env.step(action)
/opt/conda/lib/python3.7/site-packages/gym/wrappers/time_limit.py:49 step *
observation, reward, done, info = self.env.step(action)
/opt/conda/lib/python3.7/site-packages/gym/wrappers/order_enforcing.py:37 step *
return self.env.step(action)
/opt/conda/lib/python3.7/site-packages/gym/envs/box2d/lunar_lander.py:518 step *
p.ApplyLinearImpulse(
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py:458 converted_call
if not options.user_requested and conversion.is_whitelisted_for_graph(f):
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py:369 is_whitelisted_for_graph
if tf_inspect.isgeneratorfunction(o):
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/util/tf_inspect.py:387 isgeneratorfunction
return _inspect.isgeneratorfunction(tf_decorator.unwrap(object)[1])
/opt/conda/lib/python3.7/inspect.py:177 isgeneratorfunction
object.__code__.co_flags & CO_GENERATOR)

AttributeError: 'builtin_function_or_method' object has no attribute '__code__'

Are you running the notebook in the Coursera Labs environment, or somewhere else?

Straight from the Coursera lab assignment environment.

By the way, I did some additional debugging, and the problem pops up when the `agent_learn` function is called. It fails when the `compute_loss` function is being called.

def agent_learn(experiences, gamma):
    """
    Updates the weights of the Q networks.
    
    Args:
      experiences: (tuple) tuple of ["state", "action", "reward", "next_state", "done"] namedtuples
      gamma: (float) The discount factor.
    
    """
    
    # Calculate the loss
    print(f"Agent-learn-Calc Loss")
    with tf.GradientTape() as tape:
        loss = compute_loss(experiences, gamma, q_network, target_q_network)

To add more detail, the `compute_loss` function fails right at this instruction:

y_targets = rewards + (gamma * max_qsa * (1 - done_vals))
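As a side note on what that line computes: a minimal numeric sketch of the Bellman target, using made-up plain-Python stand-ins for the TensorFlow tensors in the notebook (the variable names mirror the hint, but the values here are purely illustrative):

```python
# Hypothetical mini-batch of 3 transitions (illustrative values only).
rewards   = [1.0, -0.5, 100.0]   # R(s, a)
max_qsa   = [4.0, 2.0, 7.0]      # max over a' of Q_target(s', a')
done_vals = [0.0, 0.0, 1.0]      # 1.0 marks a terminal transition
gamma = 0.995

# Bellman target: the bootstrap term gamma * max_qsa is zeroed out
# for terminal transitions by the (1 - done) mask.
y_targets = [r + gamma * q * (1 - d)
             for r, q, d in zip(rewards, max_qsa, done_vals)]

print(y_targets)  # the terminal (third) transition keeps only its reward, 100.0
```

The `(1 - done_vals)` mask is what stops the agent from bootstrapping past the end of an episode; in the notebook the same arithmetic is done element-wise on tensors.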

I hope this helps.

Perhaps you have made some other changes to the notebook which broke your compute_loss() function, or you otherwise modified something incorrectly.

I have tested my copy of the notebook and it is not throwing any errors or warnings when running the “9 - Train the Agent” cell. Note that training takes a long time and has not completed yet.

Thanks for your response,

Can I ask you to paste the `compute_loss` function? I have indeed used the lines suggested in the "Hints" sections. Here is my version.

Thanks,
Adrian

# mentor edit: code removed

In the future, please do not share your code on the forum - that is not allowed by the Code of Conduct.

Sending it via a personal message (upon request) would be better.

I’ll give it a look and provide feedback, then delete the code from this thread.

Try removing the print() statements you’ve added, because they’re not correctly formatted f-strings.

  • If you want to print the value of a variable, you have to enclose it in curly braces.

  • Adding print statements inside a function that runs very frequently will clog up the browser terminal output, and it can also make the grader mad.
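For example (a generic illustration with a made-up variable, not assignment code):

```python
loss_value = 0.1234

# Without braces, the variable name is printed literally:
print(f"loss_value")                # prints: loss_value

# With braces, the value is interpolated:
print(f"loss = {loss_value}")       # prints: loss = 0.1234
print(f"loss = {loss_value:.2f}")   # prints: loss = 0.12
```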

Also, the cost function should not use env.step(action). Please delete that line.

The env.step() function is used in the agent training in Step 9. You don’t need that inside the cost function (and the Hints do not mention doing so).
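To illustrate the separation, here is a hedged sketch using a dummy stand-in environment (not the notebook's code): `env.step()` belongs in the interaction loop that collects experiences, while the loss function works purely on the stored mini-batch and never touches the environment.

```python
import random

class DummyEnv:
    """Stand-in with the classic Gym 4-tuple step() API seen in the traceback."""
    def reset(self):
        return 0.0
    def step(self, action):
        next_state = random.random()
        reward = 1.0 if action == 1 else 0.0
        done = next_state > 0.9
        return next_state, reward, done, {}

def compute_loss_sketch(experiences, gamma):
    # Operates only on stored transitions -- no env.step() calls in here.
    total = 0.0
    for state, action, reward, next_state, done in experiences:
        y = reward * 1.0              # toy target; real code adds gamma * max Q'
        total += (y - 0.0) ** 2       # toy prediction of 0 stands in for Q(s, a)
    return total / len(experiences)

env = DummyEnv()
memory = []
state = env.reset()
for _ in range(5):                    # interaction loop: env.step() lives HERE
    action = random.choice([0, 1])
    next_state, reward, done, _ = env.step(action)
    memory.append((state, action, reward, next_state, float(done)))
    state = env.reset() if done else next_state

loss = compute_loss_sketch(memory, gamma=0.995)
```

The actual loss in the assignment is of course computed from the Q-networks, but the structural point is the same: sampling transitions and computing the loss are two separate stages.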