Week 4 Exercise 1 - compute_content_cost

Can anyone please suggest what mistake I made in my code?

I followed everything mentioned in the guide.


InternalError                             Traceback (most recent call last)
<ipython-input-...> in <module>
      1 tf.random.set_seed(1)
----> 2 a_C = tf.random.normal([1, 1, 4, 4, 3], mean=1, stddev=4)
      3 a_G = tf.random.normal([1, 1, 4, 4, 3], mean=1, stddev=4)
      4 J_content = compute_content_cost(a_C, a_G)
      5 J_content_0 = compute_content_cost(a_C, a_C)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    199   """Call target, and fall back on dispatchers if there is a TypeError."""
    200   try:
--> 201     return target(*args, **kwargs)
    202   except (TypeError, ValueError):
    203     # Note: convert_to_eager_tensor currently raises a ValueError, not a

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/random_ops.py in random_normal(shape, mean, stddev, dtype, seed, name)
     87   """
     88   with ops.name_scope(name, "random_normal", [shape, mean, stddev]) as name:
---> 89     shape_tensor = tensor_util.shape_tensor(shape)
     90     mean_tensor = ops.convert_to_tensor(mean, dtype=dtype, name="mean")
     91     stddev_tensor = ops.convert_to_tensor(stddev, dtype=dtype, name="stddev")

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py in shape_tensor(shape)
   1027   # not convertible to Tensors because of mixed content.
   1028   shape = tuple(map(tensor_shape.dimension_value, shape))
-> 1029   return ops.convert_to_tensor(shape, dtype=dtype, name="shape")
   1030
   1031

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
   1497
   1498   if ret is None:
-> 1499     ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
   1500
   1501   if ret is NotImplemented:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
    336                                          as_ref=False):
    337   _ = as_ref
--> 338   return constant(v, dtype=dtype, name=name)
    339
    340

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
    262   """
    263   return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 264                         allow_broadcast=True)
    265
    266

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    273       with trace.Trace("tf.constant"):
    274         return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 275     return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    276
    277   g = ops.get_default_graph()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    298 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
    299   """Implementation of eager constant."""
--> 300   t = convert_to_eager_tensor(value, ctx, dtype)
    301   if shape is None:
    302     return t

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
     95   except AttributeError:
     96     dtype = dtypes.as_dtype(dtype).as_datatype_enum
---> 97   ctx.ensure_initialized()
     98   return ops.EagerTensor(value, ctx.device_name, dtype)
     99

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py in ensure_initialized(self)
    537       if self._use_tfrt is not None:
    538         pywrap_tfe.TFE_ContextOptionsSetTfrt(opts, self._use_tfrt)
--> 539       context_handle = pywrap_tfe.TFE_NewContext(opts)
    540     finally:
    541       pywrap_tfe.TFE_DeleteContextOptions(opts)

InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory

Hey @Lucy_Hui,
This error usually comes up when the shared server's GPU is under considerable load and has no free memory left; it is not a bug in your code. You can simply restart your kernel a few times and it should work just fine. For further reference, check this thread out.
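For context, the traceback ends inside ctx.ensure_initialized(): TensorFlow fails while creating its CUDA context because the GPU's memory is already exhausted, so execution never even reaches your compute_content_cost code. After a restart, a quick sanity check (a minimal sketch, not part of the assignment) is:

import tensorflow as tf

# Lists the GPUs TensorFlow can see; should print at least one device.
print(tf.config.list_physical_devices('GPU'))

# Any small GPU op forces CUDA context creation; it fails with the same
# InternalError for as long as the device's memory is still exhausted.
print(tf.random.normal([1]))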

Just a suggestion for the future: you can always search Discourse for the issue you are facing, since it is highly likely that someone else has already asked the same question and had it resolved. It will save you a great deal of time.
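One more optional mitigation, assuming you can run code before any other TensorFlow op in the notebook: asking TensorFlow to allocate GPU memory on demand, rather than reserving it all up front, makes this kind of failure on a shared server less likely. A minimal sketch:

import tensorflow as tf

# Must run before anything touches the GPU; otherwise TensorFlow raises a
# RuntimeError because the physical devices are already initialized.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

Note that this only helps when some GPU memory is actually free; if another process is holding the whole device, restarting the kernel is still the way to go.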

Regards,
Elemento