Keras Tuner deficiencies

Are there any courses through DeepLearning.AI that concentrate on the advanced usage of Keras Tuner (KT)? I’ve been using KT for a few months now, and there are certain aspects I simply can’t figure out; a multitude of Google searches has not been of any assistance.

As an example, I’ve developed a convolutional neural network whose 20-dimensional hyperparameter space has approximately 2 billion unique combinations. I’m using Bayesian optimization within KT, with the maximum number of trials set to 1E6. The tuning session has been running for about a month, and I’ve noticed that certain hyperparameter combinations are now being repeated. I would really like to find out why this is happening.

I have an early stopping callback for each model/trial with a patience of 3, but I’m starting to wonder whether KT has any stopping criterion for the search itself. Will KT run all 1E6 trials? I would hope that KT has implemented some kind of convergence test. I’ve posted this query on multiple TensorFlow blogs but have yet to receive any responses. Any help at all would be GREATLY appreciated. Thanks in advance.
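On the repeated combinations specifically: even in a space of ~2 billion points, sampling 1E6 times is expected to produce some repeats well before the space is exhausted (the birthday problem). Here is a back-of-the-envelope sketch using the numbers from the question, under the simplifying assumption that trials are drawn independently and uniformly; Bayesian optimization is not uniform (it deliberately concentrates on promising regions), so real repeats can only be more frequent than this baseline:

```python
def expected_collisions(n_trials: int, space_size: int) -> float:
    """Expected number of repeated draws when sampling `n_trials` times
    uniformly (with replacement) from `space_size` distinct points."""
    # Expected number of distinct points seen: N * (1 - (1 - 1/N)^n)
    distinct = space_size * (1.0 - (1.0 - 1.0 / space_size) ** n_trials)
    return n_trials - distinct

# Numbers from the question: ~2e9 combinations, 1e6 trials
approx = expected_collisions(10**6, 2 * 10**9)   # roughly 250 repeats

# The classic birthday shortcut n^2 / (2N) lands in the same ballpark
shortcut = (10**6) ** 2 / (2 * 2 * 10**9)
```

So even under the most favorable (uniform) assumption, a couple of hundred repeats are expected over 1E6 trials; seeing duplicates is not by itself a sign that something is broken.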

Course 3 of the MLOps specialization covers Keras Tuner, but I’m not sure whether it goes that deep; you can have a look at it. On the stopping question: an EarlyStopping callback with a patience of 3 should end training for an individual trial once the monitored metric stops improving, but I’m not sure that it stops the search itself, or how a search-level stopping criterion would be configured. I would also suggest running a more limited search unless you have a lot of computation power.
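To illustrate the distinction between the two stopping levels, here is a hand-rolled toy (not Keras Tuner’s actual internals): patience cuts short the epochs of one trial, while the outer search loop keeps going until its trial budget is spent, with no convergence check of its own:

```python
def run_trial(val_losses, patience=3):
    """Toy early stopping: end ONE trial's training when the validation
    loss hasn't improved for `patience` consecutive epochs."""
    best, wait, epochs_run = float("inf"), 0, 0
    for loss in val_losses:
        epochs_run += 1
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                break  # stops THIS trial only, not the search
    return best, epochs_run

def search(all_trial_curves, max_trials):
    """Toy outer loop: run trials until the budget is exhausted.
    Note there is no convergence test here -- only the budget stops it."""
    scores = []
    for curve in all_trial_curves:
        if len(scores) >= max_trials:
            break  # the only search-level stopping criterion in this toy
        scores.append(run_trial(curve)[0])
    return min(scores)

# A trial whose loss plateaus: training stops after the patience window,
# so the late improvement (0.5) is never reached
best, epochs = run_trial([1.0, 0.9, 0.9, 0.9, 0.9, 0.5], patience=3)
```

If the real tuner behaves like this toy (one budget, no convergence test), then with max_trials set to 1E6 the search would indeed keep launching trials until the budget runs out, however long that takes.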

The Practical Data Science specialization also has a section on tuning, which uses AWS compute resources. I believe it was course 3; you can have a look.