The use of tie_in() here is interesting but also perplexing. ChatGPT describes it as linking variables so that differentials are computed as they change with each other, but that does not match this bucket calculation. In JAX, jax.lax.tie_in(x, y) returns a value equal to y while creating a formal data dependency on x, so that under jit/XLA the result is not hoisted out of the traced computation as a constant; it changes no values, which is why its role in the bucket calculation is not obvious from reading the code. (In newer JAX versions tie_in is a no-op and has since been removed.)
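A minimal sketch of tie_in's effective (value-level) semantics, using a stand-in definition since the real `jax.lax.tie_in` is deprecated/removed in recent JAX releases:

```
import numpy as np

def tie_in(x, y):
    # Stand-in for jax.lax.tie_in: returns y unchanged. Under jit, the
    # real tie_in additionally made y depend on x so XLA could not lift
    # it out of the traced computation as a compile-time constant.
    return y

n_hashes = 2
buckets = np.zeros((n_hashes, 8), dtype=np.int32)  # (n_hashes, seqlen), toy values
offsets = tie_in(buckets, np.arange(n_hashes, dtype=np.int32))
print(offsets)  # [0 1] -- identical to np.arange(n_hashes); no values change
```

So in eager NumPy the call is a pure identity on its second argument; it only matters inside a traced/compiled JAX computation.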

### Step 3

```
rotated_vecs = np.concatenate([rotated_vecs, -rotated_vecs], axis=-1)
if verbose:
    print("rotated_vecs.shape", rotated_vecs.shape)

### Step 4 ###
buckets = np.argmax(rotated_vecs, axis=-1).astype(np.int32)
if verbose:
    print("buckets.shape", buckets.shape)
if verbose:
    print("buckets", buckets)

if mask is not None:
    n_buckets += 1  # Create an extra bucket for padding tokens only
    buckets = np.where(mask[None, :], buckets, n_buckets - 1)

# buckets is now (n_hashes, seqlen). Next we add offsets so that
# bucket numbers from different hashing rounds don't overlap.
offsets = tie_in(buckets, np.arange(n_hashes, dtype=np.int32))
offsets = np.reshape(offsets * n_buckets, (-1, 1))

### Step 5 ###
buckets = np.reshape(buckets + offsets, (-1,))
if verbose:
    print("buckets with offsets", buckets.shape, "\n", buckets)
### End Code Here
return buckets
```
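Ignoring tie_in (an identity on values), Steps 4–5 reduce to plain NumPy: each hashing round's bucket ids get shifted into their own disjoint range before flattening. A self-contained sketch with toy (hypothetical) sizes:

```
import numpy as np

# Toy setup: 3 hash rounds, 4 buckets per round, sequence length 5.
n_hashes, n_buckets, seqlen = 3, 4, 5
rng = np.random.default_rng(0)
buckets = rng.integers(0, n_buckets, size=(n_hashes, seqlen)).astype(np.int32)

# Offsets shift round r's bucket ids into [r*n_buckets, (r+1)*n_buckets).
offsets = np.reshape(np.arange(n_hashes, dtype=np.int32) * n_buckets, (-1, 1))
flat = np.reshape(buckets + offsets, (-1,))

# Each round now occupies a disjoint id range, so sorting `flat` never
# mixes buckets from different hashing rounds.
for r in range(n_hashes):
    chunk = flat[r * seqlen:(r + 1) * seqlen]
    assert chunk.min() >= r * n_buckets and chunk.max() < (r + 1) * n_buckets
```

This is why the comment in the code says bucket numbers from different rounds "don't overlap": round 0 uses ids 0–3, round 1 uses 4–7, round 2 uses 8–11.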