(I am assuming this also makes my exercise 8 test fail.)
Thanks for responding. I'm trying to figure out where I'm making a mistake; one possible culprit is how I calculate example_weights in exercise 2. My lab ID is tqxhipjp.
I don’t touch the code that defines training.TrainTask and EvalTask.
I was told my example_weights was wrong, so after re-reading the instructions I assigned example_weights = np.ones_like(targets). If this is wrong, what should I be referring to?
I get the same error, but the TrainTask and EvalTask are created in a cell I'm not supposed to modify (see below). Does that mean I should just ignore the failed test?
My bad: I was using np.ones to generate the weights, which produces floats. Once corrected to integers (as the data_generator test suggested), the error disappeared.
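For anyone hitting the same thing, the dtype difference between the two constructors is easy to check (a minimal sketch with a made-up targets array, not the actual assignment data):

```python
import numpy as np

# hypothetical integer targets, standing in for the data generator's output
targets = np.array([0, 1, 1, 0])

# np.ones defaults to float64, regardless of what targets looks like
float_weights = np.ones(targets.shape)
print(float_weights.dtype)  # float64

# np.ones_like copies both the shape AND the dtype from targets,
# so integer targets yield integer weights
int_weights = np.ones_like(targets)
print(np.issubdtype(int_weights.dtype, np.integer))  # True
```

So `np.ones_like(targets)` gives integer weights automatically whenever the targets themselves are integers.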
I thus guess that tl.CrossEntropyLoss auto-upgrades to WeightedCrossEntropyLoss when weights are available.
I'm surprised: in my experience (e.g., optimizing Monte Carlo simulations), weights were floats, precisely to allow fine-grained control.
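Float weights do make sense mathematically. A minimal sketch of what a weighted cross-entropy computes (my own illustration of the formula, not Trax's actual implementation):

```python
import numpy as np

def weighted_ce(probs, weights):
    """Weighted cross-entropy: per-example -log(p of the true class),
    averaged with the given weights. Illustrative formula only."""
    return float(np.sum(weights * -np.log(probs)) / np.sum(weights))

probs = np.array([0.9, 0.6])  # predicted probability of the true class
uniform = weighted_ce(probs, np.array([1.0, 1.0]))
skewed = weighted_ce(probs, np.array([1.0, 3.0]))  # up-weight example 2
print(uniform < skewed)  # True: the up-weighted (harder) example dominates
```

With all-ones weights this reduces to the plain mean cross-entropy, which is presumably why the assignment gets away with integer ones.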
I already tried that, but then there is a mismatch with the expected values printed in the Jupyter notebook for the output of train_model (although this does not throw an error):
Step 1: Total number of trainable weights: 2327042
Step 1: Ran 1 train steps in 1.97 secs
Step 1: train WeightedCategoryCrossEntropy | 0.69622028
Step 1: eval WeightedCategoryCrossEntropy | 0.69192076
Step 1: eval WeightedCategoryAccuracy | 0.50000000
Expected output (Approximately)
Step 1: Total number of trainable weights: 2327042
Step 1: Ran 1 train steps in 1.54 secs
Step 1: train CrossEntropyLoss | 0.70209986
Step 1: eval CrossEntropyLoss | 0.69093817
Step 1: eval Accuracy | 0.50000000
I also checked the unit test cases and found this in the exception block:
It looks like the code in the unit test case has been commented out for 'CrossEntropyLoss' and 'Accuracy'. So we can pass the unit tests by modifying train_task and eval_task to produce the expected values ['WeightedCategoryCrossEntropy', 'WeightedCategoryAccuracy']. However, the expected output after running train_model still shows ['CrossEntropyLoss', 'Accuracy'].
It seems like a mismatch all around between the expected values in the unit test cases and the expected train_model output printed in the Jupyter notebook.
Thank you for reporting the issue. There were a lot of code changes in August, and some bugs surfaced as a result. The staff will try to fix it as soon as possible.
Please send me your Jupyter notebook assignment code so I can check how you got the mismatched output.
I wanted to report that I got a notification saying the notebook had been changed. After switching to the new version and double-checking all my implementation code, I'm seeing a suspicious dimension mismatch. This is the error I get when trying to train the model:
TypeError: mul got incompatible shapes for broadcasting: (32, 2), (16, 2).
There are a bunch of cells we're not supposed to touch, and after double-checking every value I can, I still can't find the issue. Could this be a problem with the latest notebook update?
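For what it's worth, (32, 2) and (16, 2) can't broadcast because the leading dimensions differ, and those look like two different batch sizes meeting in the loss, so my guess (an assumption, not a diagnosis) is a batch_size mismatch between the generators feeding TrainTask and EvalTask. The same rule is easy to reproduce in plain NumPy:

```python
import numpy as np

logits = np.ones((32, 2))   # e.g. model output for a batch of 32
weights = np.ones((16, 2))  # e.g. weights built from a batch of 16

try:
    logits * weights        # elementwise mul, like the loss does
except ValueError as e:
    # broadcasting only aligns dimensions that are equal or 1;
    # 32 vs 16 is neither, so multiplication fails
    print(e)
```

So it may be worth checking that the batch size passed to the data generators matches the one used in the non-editable training cells.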