Sorry if this is “gilding the lily”, but I can’t help myself:
Note that the TF/Keras “layer functions” like `Dense(...)` are actually constructors: they return a layer *object*, and that object is itself callable. That’s what Gent and Nobu mean when they say “instantiate the object”. First you construct the layer, then you call (invoke) it with the input tensor, which is why you see two sets of parens there. So, yes, this is TF specific: it depends on the definition of the `Layer` class in TF/Keras, which implements `__call__`.
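Here’s a minimal sketch of the two-step pattern using the Functional API (the layer sizes are just illustrative, not from the course):

```python
import tensorflow as tf

# A symbolic input tensor (batch dimension is implicit).
inputs = tf.keras.Input(shape=(64,))

# Step 1: the constructor call Dense(32, ...) builds a Layer object.
dense = tf.keras.layers.Dense(32, activation="relu")

# Step 2: calling that object on a tensor invokes its __call__ method
# and returns the output tensor.
x = dense(inputs)

# Usually the two steps are fused, which gives the "two sets of parens":
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

Separating the two steps, as with `dense` above, is handy when you want to reuse the same layer (same weights) on more than one input tensor.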
They don’t give us much explanation of the Sequential and Functional APIs in the course material. There is plenty of documentation on the TF site, and there’s also a really nice thread here on Discourse that gives a good walkthrough.