Measure of closeness between two vectors

Greetings!

Improving Deep Neural Networks
Week 1
Flash cards

What is the idea behind dividing by the sum of the Euclidean lengths?

Thank you.



There seem to be two main reasons:

First, it normalizes the metric to the range [0, 1]. Second, the computation doesn't only compare magnitudes but also reflects the contribution of direction.
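For intuition, here is a minimal sketch in Python (assuming the flash card's metric is closeness(a, b) = ED(a, b) / (|a| + |b|), as the rest of this thread suggests). By the triangle inequality, |a - b| <= |a| + |b|, so the ratio always lands in [0, 1]; it is 0 only for identical vectors and reaches 1 exactly when a and b point in opposite directions:

```python
import numpy as np

def closeness(a, b):
    # Assumed flash-card metric: Euclidean distance divided by
    # the sum of the two vectors' Euclidean lengths.
    return np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b))

a = np.array([1.0, 1.0])
print(closeness(a, a))      # 0.0   -> identical vectors
print(closeness(a, -a))     # 1.0   -> opposite directions
print(closeness(a, 2 * a))  # ~0.33 -> same direction, different magnitude
```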

If you divide by the sum of the input vectors’ lengths, then for pairs of vectors with the same ED the closeness will be inversely proportional to the sum of their lengths. IMHO this is odd (see the example below)…

Vector    X      Y      ED     L      ΣL     Closeness
a1        1.00   1.00   1.41   1.41   4.24   0.33
b1        2.00   2.00          2.83
a2        2.00   2.00   1.41   2.83   7.07   0.20
b2        3.00   3.00          4.24

(ED, ΣL, and Closeness = ED / ΣL are given once per pair.)
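A quick script to reproduce these numbers (same assumed metric as above):

```python
import numpy as np

def closeness(a, b):
    return np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b))

a1, b1 = np.array([1.0, 1.0]), np.array([2.0, 2.0])
a2, b2 = np.array([2.0, 2.0]), np.array([3.0, 3.0])

print(round(closeness(a1, b1), 2))  # 0.33 -- ED = 1.41 for both pairs, but ...
print(round(closeness(a2, b2), 2))  # 0.20 -- ... longer vectors give a smaller value
```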

It’s better to normalize each vector by its length first; otherwise, yes, it will be problematic as you write here!
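Something like this, for instance (a sketch of the normalize-first idea, not code from the course):

```python
import numpy as np

def closeness_normalized(a, b):
    # Scale each vector to unit length first, so the comparison
    # depends only on direction, not on magnitude.
    a_hat = a / np.linalg.norm(a)
    b_hat = b / np.linalg.norm(b)
    return np.linalg.norm(a_hat - b_hat) / (np.linalg.norm(a_hat) + np.linalg.norm(b_hat))

a1, b1 = np.array([1.0, 1.0]), np.array([2.0, 2.0])
a2, b2 = np.array([2.0, 2.0]), np.array([3.0, 3.0])

# Both pairs point in the same direction, so both now give 0:
print(closeness_normalized(a1, b1))  # 0.0
print(closeness_normalized(a2, b2))  # 0.0 (up to floating-point rounding)
```

With this change the oddity from the table above disappears: pairs that differ only in scale all get the same value.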

I still do not understand the value of the normalization.
In fact, the longer the vectors are, the lower the closeness, even if the magnitude of the difference between the two vectors is the same…


Does this example from ChatGPT help in understanding:


This is clear.
But from the original flash card it follows:
Closeness = ED / (|a| + |b|)
Right?
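That would match the numbers in the table above: for the first pair, ED / (|a1| + |b1|) = 1.41 / (1.41 + 2.83) = 1.41 / 4.24 ≈ 0.33.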


Perhaps; I am not sure about the details of that content right now.