Information gain in the Week 4 practice lab



Hello everyone,
I have a problem calculating information gain. My calculated information gain and the expected information gain appear to be the same, but as you can see in the picture above, it is marked wrong.
If you can, please advise me and help me find my mistake.

No, I don’t think that’s the problem. There is an error in your compute_entropy() code.

  • If len(y) is zero, then dividing by it raises a divide-by-zero error.
  • So you have to test len(y) first.
  • The function must return 0 if len(y) is zero.

Here’s the key point from the instructions: when a node is empty, its entropy is defined to be 0, so compute_entropy() must return 0 in that case.
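For illustration, here is a minimal sketch of what that guard can look like, assuming y is a NumPy array of 0/1 labels as in the lab (the variable names are my own, not necessarily the starter code's):

import numpy as np

def compute_entropy(y):
    # An empty node carries no information, so its entropy is defined as 0.
    if len(y) == 0:
        return 0.
    # Fraction of positive (label 1) examples at this node.
    p1 = np.mean(y == 1)
    # A pure node (all 0s or all 1s) also has zero entropy.
    if p1 == 0 or p1 == 1:
        return 0.
    return -p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)

With that guard in place, an empty branch (for example, a split that sends no examples one way) simply contributes zero entropy instead of crashing.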

Thanks, first of all, for your attention and advice.
I have solved that problem with your help.
But unfortunately, I have now run into another problem.
Could you take a look? It is shown in the picture below.

It appears your code doesn’t compute the correct information gain when checked using the tests in the compute_information_gain_test() function.

You can find this function via the File → Open menu, then open the “public_tests.py” file. There you can see what data the tests are using, and perhaps understand where your code is malfunctioning.
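For reference, the quantity those tests check is the standard information gain for a binary split: the entropy at the node minus the weighted average entropy of the left and right branches. A minimal sketch of that formula, assuming label arrays for the node and its two branches (the lab's actual function signature may differ, so treat this only as a restatement of the math):

def compute_information_gain(y_node, y_left, y_right):
    # Reuses the compute_entropy(y) helper sketched earlier in this thread.
    # Weights are the fractions of the node's examples sent to each branch.
    w_left = len(y_left) / len(y_node)
    w_right = len(y_right) / len(y_node)
    weighted_child_entropy = w_left * compute_entropy(y_left) + w_right * compute_entropy(y_right)
    # Information gain = entropy at the node minus weighted child entropy.
    return compute_entropy(y_node) - weighted_child_entropy

If your value disagrees with the test, the usual suspects are the weights (they must be fractions of the node's examples, not of the whole dataset) and the entropy of an empty branch.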

I had the same problem. Check your compute_entropy code and add a check for m == 0; it is very simple with an if statement.
You can also add an error handler, for example:

try:
    # your entropy calculation goes here
except ZeroDivisionError:
    return 0.  # the lab expects 0 for an empty node
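For completeness, here is the simple if-based check mentioned above, placed at the top of compute_entropy (m is just my name for len(y); yours may differ):

def compute_entropy(y):
    m = len(y)
    if m == 0:
        return 0.  # guard the empty-node case before any division
    # ... the rest of your entropy calculation, which can now divide by m safely ...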

I hope it works for you

I suggest not including any unnecessary statements in the code.