```python
import tensorflow as tf

# Define a simple sequential model
def create_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
    return model

# Create a basic model instance
model = create_model()
```

Then I can look at the architecture of the model:

```python
model.summary()
```

If I save this model like this:

```python
model.save('path/to/model')
```

and then look in that directory ('path/to/model'), I will find a set of files.

I can also load the model back from this directory and call summary() on it.
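To make that concrete, here is a minimal sketch of the round trip described above ('path/to/model' is just an illustrative path, and this assumes the TF 2.x SavedModel directory format): a full save writes a directory of files, and loading that directory back gives a model whose summary() works.

```python
import tensorflow as tf

# Build a small model in the same shape as the snippet above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# A full save writes a directory of files (SavedModel format in TF 2.x).
model.save('path/to/model')

# Loading that directory back recovers the whole model, summary() included.
restored = tf.keras.models.load_model('path/to/model')
restored.summary()
```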

The point is that a checkpoint is more limited than a full “save” of a model. A checkpoint includes only the current weights, while the full model includes other information as well, as described on this page, which was part of the documentation link that I gave you on your other question thread:

If you are starting from nothing but a checkpoint, you will need to figure out a way to recover the other three categories of data from a previous full “save” first.
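When no previous full save exists but the model-building code does, the usual pattern is to re-run that code to recreate the architecture and compile settings, then load only the weights from the checkpoint. A minimal sketch (the `demo_ckpt` prefix here is a stand-in for the checkpoint files you actually have):

```python
import tensorflow as tf

def create_model():
    # Same architecture as the snippet at the top of the thread.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
    return model

# Stand-in for the checkpoint files you already have on disk.
source = create_model()
source.save_weights('demo_ckpt')

# The checkpoint supplies only the weights; the code supplies the rest
# (architecture, loss, optimizer) by re-running create_model().
model = create_model()
model.load_weights('demo_ckpt')
```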

Thank you very much for your reply!
Of course, I understand that saving in the checkpoint format is more limited than fully “saving” the model.
That is actually the reason for my question: is there a process that would allow us to convert one format to the other?
After all, a model restored from the “Checkpoint” format is a fully functional model that produces the required inference. Why, in that case, can this restored model not be saved in the full “save” format?

No, it’s not. That’s the whole point: the weights do not include the model architecture and structure, right? E.g. they don’t tell you what the activation functions are, what the loss function is, and so on. They only give you the W^{[l]} and b^{[l]} values (or perhaps additional values in an RNN or another type of network). That’s not enough to construct the full working model.
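A tiny illustration of that point: the weights of a model are just bare arrays, with no record of activations, loss, or how the layers are wired together.

```python
import tensorflow as tf

# A small model: the arrays below are all a weights-only view preserves.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(1),
])

# get_weights() returns bare arrays: just the W and b values.
# Nothing here records the activations, the loss, or the layer graph.
for arr in model.get_weights():
    print(arr.shape)
```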

So you need to do a “load” from a previous full “save” and then you can load a checkpoint with just the weights.
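That two-step order can be sketched like this (a minimal example with illustrative paths; `full_model` stands in for the earlier full save and `later_ckpt` for the newer weights-only checkpoint):

```python
import tensorflow as tf

# Stand-ins for an earlier full save plus a later weights-only checkpoint.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])
model.compile(optimizer='adam', loss='mse')
model.save('full_model')            # full "save": architecture, weights, and more
model.save_weights('later_ckpt')    # checkpoint: weights only

# Recovery order: full "load" first, then the checkpoint on top of it.
restored = tf.keras.models.load_model('full_model')
restored.load_weights('later_ckpt')
```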

That’s the problem… I don’t have any “previous” saves; I only have these four files produced by saving the “Checkpoint”.
But… hmm… the pipeline.config file has all the necessary data. For example, this is a model from the TensorFlow 2 Detection Model Zoo, and the file contains the name of the model and, therefore, its entire architecture. The remaining files hold the W[l] and b[l] values; otherwise we would not be able to load the weights (which require the exact architecture for those weights and biases) and get a working model.

If there is enough info in the checkpoint to find a file that actually has the full model data, then you can load that with the real “load” command (as opposed to “load checkpoint”) and then you’ve got the code and the model. Then you can load your checkpoint data and you’ve got everything.

But the higher level question is why do you only have a checkpoint? My interpretation is that means that the place you got that from does not know what they are doing. If they were trying to make their model usable by someone else, they need to supply more information, as we’ve been discussing. So this sounds sketchy to me and my question would be to step back and ask if there is a better way to get the model that you want. How did you find that checkpoint file and why do you believe that is what you want? Or at an even higher level: what is the problem you’re actually trying to solve here?

Yes, I understand what you mean.
But I proceed from the existing realities →
I have four files = the model saved as a “Checkpoint”; that’s all I have.
But it seemed strange to me that after restoring the detection_model from the checkpoint, which gives me correct inference, I cannot save this detection_model with detection_model.save('path/to/model').

But none of that answers any of my higher level questions. What I’m saying is that the “existing reality” needs to be adjusted. How can you have just a checkpoint? That doesn’t make any sense for all the reasons we’ve been discussing. So why are you in that situation and how can you fix it?

The problem I’m trying to solve is that I inherited a project whose developers left a long time ago. The project has a saved ‘Object detection’ model in Checkpoint format. As you said above, this is a limited format that is inconvenient to work with. And so I ran into the fact that even after restoring the model from the checkpoint and getting correct inference from it, I nevertheless cannot save this restored model at a high level, like model.save().

An excellent fix for this problem would be a workflow for saving a model restored from the Checkpoint format into the “full” format ))

I still don’t understand… once the restored model works and gives correct inference, it shouldn’t matter how it was created. It works, and works correctly, so why can’t this already-working model be saved with model.save()?

The way you reloaded the model from the checkpoint gives you a model capable of running in inference mode, but that is not all you need of the 4 things listed in the image I showed earlier about what “save” does: maybe you are missing the optimizer and the loss function. Those are only required in training mode. So maybe you could load the checkpoint, then define an optimizer and a loss function, and then try the save again. Here’s that image again:

Maybe there is something wrong with how you are calling the save function. E.g. there’s some other parameter you have to supply to cover the fact that your state is incomplete.
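The suggestion above (restore the checkpoint, supply the optimizer and loss via compile(), then retry the full save) can be sketched like this. Everything here is a stand-in: build_architecture() is hypothetical, and the checkpoint prefix and output directory are illustrative paths.

```python
import tensorflow as tf

def build_architecture():
    # Hypothetical stand-in for however the original layers were defined.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])

# Stand-in for the checkpoint files you already have.
build_architecture().save_weights('arch_ckpt')

model = build_architecture()
model.load_weights('arch_ckpt')

# Supply the training-mode pieces a checkpoint does not carry...
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# ...and then attempt the full save again.
model.save('full_model_from_ckpt')
```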

The fact is that I did not skip any steps when creating the model. I received 4 files, which are the model saved as a “Checkpoint”.
My question was whether it is possible to recover the “complete” model from these 4 files, given that I know the architecture of the model (the name of the model from the TensorFlow 2 Detection Model Zoo).
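For what it’s worth, when the architecture comes from a pipeline.config, the Object Detection API’s config_util.get_configs_from_pipeline_file and model_builder.build can rebuild the detection model from that config before restoring the checkpoint. The same restore-then-export pattern, shown here on a toy tf.Module so it stays self-contained (TinyModel and all paths are hypothetical stand-ins for the Zoo model and your four files):

```python
import tensorflow as tf

class TinyModel(tf.Module):
    # Hypothetical stand-in for the architecture named in pipeline.config;
    # with the Object Detection API you would build it from the parsed config.
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([4, 2]))

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

# Stand-in for the checkpoint files you already have on disk.
source = TinyModel()
source.w.assign(tf.fill([4, 2], 2.0))
ckpt_path = tf.train.Checkpoint(model=source).save('ckpt_dir/ckpt')

# Rebuild the architecture in code, restore the checkpoint's weights...
model = TinyModel()
tf.train.Checkpoint(model=model).restore(ckpt_path).expect_partial()

# ...then export the restored object as a full SavedModel directory.
tf.saved_model.save(model, 'exported_model')
```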

A checkpoint works like this for a detection model: you can restore the weights from the checkpoint, and that gives you the part of the model you chose from a given model.