Hi, I’m a little confused about why it’s OK to add the expectation values of all the people in the world in the example given in the video, when it seems clear that each person’s outcome is not independent of all the other people’s outcomes. If you used the same technique for the birthday problem, you would never end up with the correct factorial-based answer, and I’m not 100% clear on the difference.
Hi @davidpet,
This is a great question. I will try to answer in the simplest form possible. However, if you still do not get it, please let me know!
One nice thing about the sum of expectations (linearity of expectation) is that it works even if the random variables are not independent of each other. This is why it worked fine in the example.
On the other hand, we must be careful when trying to copy this idea to the birthday problem. In the birthday problem we do not have a clearly defined random variable, and what we are trying to answer is the probability of at least two people having the same birthday, not the expected value of their birthdays. An expected value is not a probability: it is a real number, and it can be negative, positive, or even infinite.
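A quick numerical illustration (a minimal sketch of mine, not from the video): below, Y is completely determined by X, so the two variables are as dependent as possible, yet E[X + Y] still equals E[X] + E[Y].

```python
import random

# Hypothetical example (not from the video): X is a fair coin flip,
# and Y = 1 - X, so X and Y are completely dependent on each other.
trials = 100_000
sum_x, sum_y, sum_total = 0, 0, 0

for _ in range(trials):
    x = random.randint(0, 1)   # X ~ Bernoulli(0.5)
    y = 1 - x                  # Y is fully determined by X
    sum_x += x
    sum_y += y
    sum_total += x + y

print("E[X]     ≈", sum_x / trials)      # ~0.5
print("E[Y]     ≈", sum_y / trials)      # ~0.5
print("E[X + Y] ≈", sum_total / trials)  # ~1.0 = E[X] + E[Y]
```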
Was that clear? Please let me know!
Thanks,
Lucas
Yes, I think I get it now. Although, couldn’t you frame it as something like: each person has an expectation of n/365 people in the room having the same birthday as them, and the sum of all their expectations should be 1 if there are two people with matching birthdays? Are there some convenient rules of thumb or tests you could use to tell that this won’t work, when the approach from the example video would?
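To make the difference concrete, here is a small sketch (my own, not from the course) that contrasts the two quantities for the birthday problem: the expected number of matching pairs, which linearity of expectation gives directly as C(n, 2)/365, and the probability of at least one match. For n = 23 they come out to about 0.69 and 0.51, so the sum of expectations counts matches; it is not the probability that at least one match occurs.

```python
from math import comb

def expected_matching_pairs(n, days=365):
    # Linearity of expectation: each of the C(n, 2) pairs matches
    # with probability 1/days; dependence between pairs does not matter.
    return comb(n, 2) / days

def prob_at_least_one_match(n, days=365):
    # Exact birthday-problem probability via the complement.
    p_no_match = 1.0
    for k in range(n):
        p_no_match *= (days - k) / days
    return 1 - p_no_match

n = 23
print("expected number of matching pairs:", expected_matching_pairs(n))   # ~0.693
print("P(at least one match):", prob_at_least_one_match(n))               # ~0.507
```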
Hey @davidpet,
A great question indeed. In fact, I am stuck on this video myself. Let me present my queries, and I hope we will get a reply soon from one of the M4ML mentors.
My main point of confusion starts at the 4:10 time-stamp, when Luis presents a simpler approach for the 3-person case:
- First, I don’t understand the random variable `matches` introduced by Luis. He states that it counts the number of matches in this game. So, what is meant by this exactly?
  - Does that mean that it is equal to the number of individual matches, like: No. of matches = No. of matches for A exclusively (only A gets his correct name, and no one else) + No. of matches for B exclusively + …?
  - Or does it mean that at least one person should get his or her correct name? And if so, then how do we count the number of matches?
- Luis breaks down `matches` as `A + B + C + …`, where he defines `A` as the number of matches that only matter to person A. But in this case, won’t there be many overlaps?
- Luis states that the probability that any person picks his own name is `1/n`, and somehow he uses that as `E[X_i]`. I don’t understand the relationship here between `E[X_i]` and `P(X_i)` (see the simulation sketch right after this list).
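To make the last bullet concrete, here is a small simulation of the name-drawing game (a sketch of mine, not from the mentors). Each X_i is the indicator that person i draws their own name, so E[X_i] = 1 · P(X_i = 1) + 0 · P(X_i = 0) = P(X_i = 1) = 1/n, and matches = X_1 + … + X_n has expectation n · (1/n) = 1.

```python
import random

n = 5           # number of people / name strips
trials = 100_000

per_person_matches = [0] * n   # how often person i draws their own name
total_matches = 0

for _ in range(trials):
    # A random way of handing out the names = a random permutation.
    drawn = list(range(n))
    random.shuffle(drawn)
    for i in range(n):
        if drawn[i] == i:              # X_i = 1: person i got their own name
            per_person_matches[i] += 1
            total_matches += 1

print("E[X_i] estimates:", [round(c / trials, 3) for c in per_person_matches])  # each ~1/n = 0.2
print("E[matches] estimate:", total_matches / trials)                           # ~1.0
```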
My Insights
- Earlier I thought that the bag containing the names is one with replacement of names, i.e., I thought that the bag always contains n strips with n unique names. In other words, it simply replaces the strip that gets picked by each person.
- In this case, it was more intuitive that the probability that any person would pick his/her own name would be 1/n.
- However, the non-intuitive thing (in my opinion) was that even when the bag is without replacement of names, the probability remains the same, i.e., 1/n (see the counting sketch right after this list).
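On the without-replacement point, you can check the 1/n claim exactly by enumerating permutations (again just a sketch of mine): for a fixed person, the assignments in which they draw their own name are exactly the (n-1)! permutations of the remaining names, so the probability is (n-1)!/n! = 1/n.

```python
from itertools import permutations
from math import factorial

n = 4
all_assignments = list(permutations(range(n)))   # every way to hand out the n names, no replacement

for person in range(n):
    favourable = sum(1 for p in all_assignments if p[person] == person)
    # favourable is always (n-1)! = 6, so each probability is 6/24 = 1/n = 0.25
    print(f"person {person}: {favourable}/{len(all_assignments)} = {favourable / len(all_assignments)}")

assert all(sum(1 for p in all_assignments if p[i] == i) == factorial(n - 1) for i in range(n))
```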
Cheers,
Elemento
Hey @lucas.coutinho and @AeryB,
Can you guys please take a look at our queries and help us out?
Cheers,
Elemento
Unfortunately, neither the above explanations nor the approach demonstrated in the video make sense to me. It feels like something is still missing. So I have figured out another way to get the same result.
Let’s compute the expected value as usual: it is the sum of the number of matches over all outcomes, divided by the number of possible outcomes.
Let’s consider n people: {a1, a2, … an}. The number of outcomes is the number of permutations, n!. Let’s use n = 3 to illustrate this here and below:
{a1, a2, a3}
{a1, a3, a2}
{a2, a1, a3}
{a2, a3, a1}
{a3, a1, a2}
{a3, a2, a1}
And how do we count that sum? Let’s calculate it as the sum, over each position i, of the number of outcomes with a match in that position. Perhaps the picture below makes this clearer: for each outcome we write 1 in the i-th position if there is a match there and 0 otherwise:
{1, 1, 1}
{1, 0, 0}
{0, 0, 1}
{0, 0, 0}
{0, 0, 0}
{0, 1, 0}
Let’s start with the 1st position. We’ll split the set of outcomes into n groups according to which name appears in the 1st position; in exactly one of these groups there is a match in the 1st position:
Group 1: {a1, a2, a3}, {a1, a3, a2}
Group 2: {a2, a1, a3}, {a2, a3, a1}
Group 3: {a3, a1, a2}, {a3, a2, a1}
Each group contains the same number of elements, (n-1)! = n!/n, and the groups are disjoint. Thus, a match in the 1st position occurs in n!/n of the outcomes.
Next, we do the same for every i-th position and then simply sum up the partial sums: n * n!/n = n!. Dividing this by n! gives an expected value of 1.
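For anyone who wants to check this argument by brute force, here is a short verification sketch (mine, with n chosen arbitrarily): it enumerates all n! outcomes, counts the matches in each, and confirms that the total number of matches is n!, so the expected value is n!/n! = 1.

```python
from itertools import permutations
from math import factorial

n = 3   # works for any small n
outcomes = list(permutations(range(n)))          # all n! possible name assignments

# Number of matches in one outcome = number of fixed points of the permutation.
total_matches = sum(sum(1 for i in range(n) if outcome[i] == i) for outcome in outcomes)

print("total matches over all outcomes:", total_matches)             # n! (6 for n = 3)
print("expected number of matches:", total_matches / factorial(n))   # 1.0
```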