I have a problem with the TPUs in the C4_W4_Lab_3_CelebA_GAN_Experiments notebook. First of all, TensorFlow is not installed by default when using the TPU runtime, so I installed it with `!pip install tensorflow` (I also tried pinning `==2.16.1` and `==2.18`).
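For reference, these are the exact install variants I tried:

```
!pip install tensorflow
!pip install tensorflow==2.16.1
!pip install tensorflow==2.18
```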
Then I get this error:

```
TPUs not found in the cluster. Failed in initialization: No OpKernel was registered to support Op 'ConfigureDistributedTPU' used by {{node ConfigureDistributedTPU}} with these attrs: [tpu_cancellation_closes_chips=2, embedding_config="", tpu_embedding_config="", compilation_failure_closes_chips=false, enable_whole_mesh_compilations=false, is_global_init=false]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

	 [[ConfigureDistributedTPU]] [Op:__inference__tpu_init_fn_4]
```
When running this code:
```python
import tensorflow as tf

try:
    # TPU detection
    tpu_cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='local')
    tf.config.experimental_connect_to_cluster(tpu_cluster_resolver)
    tf.tpu.experimental.initialize_tpu_system(tpu_cluster_resolver)
    print("All devices: ", tf.config.list_logical_devices('TPU'))
    print(f'Running on a TPU w/{tpu_cluster_resolver.num_accelerators()["TPU"]} cores')
except ValueError:
    raise BaseException('ERROR: Not connected to a TPU runtime; please make sure you have successfully chosen TPU runtime from the Edit/Notebook settings menu')

strategy = tf.distribute.TPUStrategy(tpu_cluster_resolver)
```
I also tried `print(os.environ.get('COLAB_TPU_ADDR'))` and got `None`. I had the same problem with the official TensorFlow notebook for setting up TPUs on Google Colab.
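Here is the minimal check I ran, in case the exact code matters (my understanding, which may be wrong, is that `COLAB_TPU_ADDR` is only set on the older Colab TPU Node runtimes, not the current TPU VM ones):

```python
import os

# On my runtime this prints None, which matches the resolver failure above.
print(os.environ.get('COLAB_TPU_ADDR'))
```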
I really have no idea how to solve this issue.