Next, you will create an array of tags from your cleaned dataset. Oftentimes your input sequence will exceed the maximum length of a sequence your network can process. In this case, your sequence will be cut off, and you need to append zeroes onto the end of the shortened sequences using this [Keras padding API](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences).
In the NER lab, is this part not well worded?
In this case, your sequence will be cut off, and you need to append zeroes onto the end of the shortened sequences
When a sequence is cut off, there is no need to append zeroes. Zeroes only need to be appended when the sequence is shorter than MAX_LENGTH, right?
I haven’t looked at this in depth, but my intuition is that they mean the following: when your initial sequence exceeds the maximum length, it is cut down, leaving you with two (or more, depending on the initial length) sequences. One is exactly the maximum length; the other is shorter than the maximum length, so you append zeros at the end of that shorter sequence.
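To make the two cases concrete, here is a minimal pure-Python sketch of what `pad_sequences` does with `padding='post'` and `truncating='post'` (note the real Keras API defaults to `'pre'` for both). The function name `pad_sequences_post` is made up for this sketch; it is not the lab's code. A sequence longer than `maxlen` is only truncated, and zeros are appended only to sequences shorter than `maxlen`:

```python
def pad_sequences_post(sequences, maxlen, value=0):
    """Sketch of Keras pad_sequences with padding='post', truncating='post'."""
    out = []
    for seq in sequences:
        seq = list(seq)[:maxlen]                    # truncate: no padding needed
        seq = seq + [value] * (maxlen - len(seq))   # pad only if shorter
        out.append(seq)
    return out

# The long sequence is cut off (no zeros added); the short one is padded.
print(pad_sequences_post([[1, 2, 3, 4, 5], [7, 8]], maxlen=4))
# → [[1, 2, 3, 4], [7, 8, 0, 0]]
```

So truncation and zero-padding never apply to the same sequence at once, which is why the lab's wording ("your sequence will be cut off, and you need to append zeroes") reads confusingly.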