Questions about floating point

Dear fellow mentors/classmates,

I checked the code in create_padding_mask, and I found that it intentionally casts the comparison result to float32, so the zeros become 0.0. Is this a best practice, i.e. whenever we do arithmetic in TensorFlow, should we cast to float32 by default?

def create_padding_mask(decoder_token_ids):
β€œβ€"
Creates a matrix mask for the padding cells

Arguments:
    decoder_token_ids -- (n, m) matrix

Returns:
    mask -- (n, 1, m) binary tensor
"""    
seq = 1 - tf.cast(tf.math.equal(decoder_token_ids, 0), tf.float32)

# add extra dimensions to add the padding
# to the attention logits.
return seq[:, tf.newaxis, :]
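
For reference, on a tiny illustrative batch (made-up token ids, with 0 as the padding id) the mask comes out like this:

import tensorflow as tf

x = tf.constant([[7, 6, 0, 0],
                 [1, 2, 3, 0]])
mask = create_padding_mask(x)
# mask has shape (2, 1, 4) and dtype float32:
# real tokens map to 1.0, padding positions map to 0.0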

If the zero is going to take part in actual arithmetic (as it does here, where the mask is combined with the attention logits), it’s wise to use a floating point representation.

If the zero represents a logical value, then it can be a bool.
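
That is why the cast matters in this case: inside scaled dot-product attention the mask participates in float arithmetic with the attention logits. A minimal sketch of that step (the helper name and the -1e9 constant follow the usual masking pattern, not necessarily the exact assignment code):

import tensorflow as tf

def apply_padding_mask(scaled_attention_logits, mask):
    # mask is 1.0 for real tokens and 0.0 for padding,
    # so (1 - mask) picks out the padding positions.
    # Adding a large negative number there pushes the softmax
    # weight of padded positions towards zero.
    return scaled_attention_logits + (1.0 - mask) * -1e9

logits = tf.random.uniform((2, 1, 4))      # (batch, 1, seq_len)
mask = tf.constant([[[1., 1., 0., 0.]],
                    [[1., 1., 1., 0.]]])   # shaped like create_padding_mask output
masked = apply_padding_mask(logits, mask)
weights = tf.nn.softmax(masked, axis=-1)   # padded positions get ~0 attention weight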
