MaskRCNN with Tensorflow 2.0

Hello,

I am trying to use MaskRCNN with TensorFlow 2.0.
Originally, the code uses a Lambda layer to create a variable, which worked in TensorFlow 1.x:

anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)

However, this is not feasible with TensorFlow 2.0, so I found a workaround: creating a subclass of the Keras Layer class:

import tensorflow as tf
from tensorflow import keras

KL = keras.layers

class AnchorsLayer(KL.Layer):

    def __init__(self, anchors, name="anchors", **kwargs):
        super(AnchorsLayer, self).__init__(name=name, **kwargs)
        # tf.Variable replaces K.Variable, which does not exist
        # (keras.backend only provides the lowercase K.variable).
        self.anchors = tf.Variable(anchors)

    def call(self, dummy):
        # The input is ignored; the layer just returns the stored anchors.
        return self.anchors

    def get_config(self):
        config = super(AnchorsLayer, self).get_config()
        return config

anchors = AnchorsLayer(anchors, name="anchors")(input_image)

I can run the code, but I noticed in model.summary() that the original Lambda layer has 0 parameters while the new layer has a lot of parameters. So my question is: will this affect the model architecture and performance? Thank you.

Hypothetically speaking, if the layer has parameters, some of them may be trainable; you have to check which ones. The original layer had 0 parameters, so it contributed nothing trainable. If the new layer's parameters are trainable, they will be updated during training and will therefore affect the model's weights. The model architecture itself, i.e. the layers and the way they are structured, should not change unless you explicitly change it yourself.
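One way to check this, and to keep the anchors out of the trainable count entirely, is to create the variable with trainable=False and then inspect the layer's weight lists. A minimal sketch (the anchor shape and input shape below are made-up placeholders, not values from MaskRCNN):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

KL = keras.layers

class AnchorsLayer(KL.Layer):
    """Holds a fixed anchor array as a non-trainable variable."""

    def __init__(self, anchors, name="anchors", **kwargs):
        super().__init__(name=name, **kwargs)
        # trainable=False keeps the anchors out of gradient updates and
        # out of the "Trainable params" line in model.summary().
        self.anchors = tf.Variable(anchors, trainable=False)

    def call(self, dummy):
        # The input is ignored; the layer just returns the stored anchors.
        return self.anchors

# Placeholder anchors: 256 boxes with 4 coordinates each.
anchors = np.zeros((256, 4), dtype=np.float32)

layer = AnchorsLayer(anchors)
_ = layer(tf.zeros((1, 8)))  # any input works; it is ignored

print(len(layer.trainable_weights))  # 0 -> nothing is updated in training
print(layer.count_params())          # 1024, all of them non-trainable
```

With trainable=False the parameters still appear in model.summary(), but under "Non-trainable params", so they cannot change the learned weights.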