Help with model crashing notebook

Hi there! I recently finished the ML specialization on Coursera and had a go at a small Magic: The Gathering project to consolidate what I learned. However, when I try to train my model, the kernel crashes, and I’ve tried many (old) Stack Overflow solutions to no avail.

For context, I’m basing my project on this notebook on Kaggle using this dataset, but what I want is a multi-label model: I feed it a number of card features and get back, for each color, the probability that it belongs to that card’s color identity.
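For a multi-label setup like that, the usual trick is a sigmoid output per color with binary cross-entropy, rather than a softmax (a multicolor card should be able to score high on several colors at once). Here is a minimal sketch; the feature count and layer sizes are placeholders, not values from your notebook:

```python
import numpy as np
import tensorflow as tf

NUM_FEATURES = 16  # assumption: replace with your actual feature count
NUM_COLORS = 5     # W, U, B, R, G

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    # Sigmoid, not softmax: each color is an independent yes/no question,
    # so the five outputs don't have to sum to 1.
    tf.keras.layers.Dense(NUM_COLORS, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",   # one binary problem per color
              metrics=["binary_accuracy"])

# Each output is an independent probability in [0, 1].
probs = model.predict(np.random.rand(1, NUM_FEATURES), verbose=0)
print(probs.shape)  # (1, 5)
```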

I can’t get the model to train, and it seems to be a memory issue, but I have:

  • Increased the memory buffer
  • Updated NumPy and TensorFlow
  • Added the KMP_DUPLICATE_LIB_OK flag, since I’m running in Docker on an Apple silicon MacBook
  • Added a batch_size for both my input and the model
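For what it’s worth, here is a minimal sketch of how those last two items usually fit together: the env var set before any heavy imports, and a `tf.data` pipeline that streams batches instead of handing `model.fit()` the whole array at once. The array shapes and batch size are placeholders:

```python
import os
# Work around the duplicate-OpenMP-runtime abort sometimes seen under
# Docker on Apple silicon; must be set BEFORE importing tensorflow.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import numpy as np
import tensorflow as tf

# Toy stand-ins for your real feature/label arrays (assumption).
X = np.random.rand(1000, 16).astype("float32")
y = np.random.randint(0, 2, size=(1000, 5)).astype("float32")

# Streaming batches keeps the per-step memory footprint small;
# if the kernel still dies, try lowering the batch size further.
ds = (tf.data.Dataset.from_tensor_slices((X, y))
        .shuffle(1000)
        .batch(32)
        .prefetch(tf.data.AUTOTUNE))
```

You would then pass `ds` directly to `model.fit(ds, epochs=...)` instead of the raw arrays.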

So I’d like to ask what I might be missing in the setup, or what I could change to get it to train.

Thanks!

Update: Running the notebook on my Windows machine (which has an NVIDIA GPU) works fine, although I get other errors, like the output not matching even though it has the same shape.

Updated notebook #2:
9aa925bc16633a01e8a2687aa105dffe042f3c47.ipynb (180.0 KB)

Hi Myrium!

I’m also working on a project using open-source models. Instead of downloading the models to my computer, I decided to use a model available in the cloud by making calls to HuggingFace’s Inference API.

Here is the information about the Inference API:

I know this is a different approach, but it might be useful for you, since it isn’t limited by your laptop’s hardware constraints. That said, the limitation here is the number of tokens. I haven’t reached that limit yet, but then again I’ve only made a few API calls so far.
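In case it helps, the basic shape of such a call is just an authenticated POST with a JSON payload. A minimal sketch (the model id is a placeholder you’d replace with a real one, and the request assembly is split out only to make it easy to inspect):

```python
import requests

# Placeholder model id; substitute one from the HuggingFace Hub.
API_URL = "https://api-inference.huggingface.co/models/MODEL_ID"

def build_request(token, text):
    """Assemble headers and payload for an Inference API call."""
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": text}
    return headers, payload

def query(token, text):
    """POST the payload and return the parsed JSON response."""
    headers, payload = build_request(token, text)
    resp = requests.post(API_URL, headers=headers, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

# result = query("hf_your_token_here", "Some input text")
```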


Thanks for that! I was able to get it working with some ChatGPT help debugging the code, although I admit I’m not fully sure how to validate the model.
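One simple sanity check for a multi-label model is to threshold the predicted probabilities on a held-out validation set and compute precision/recall per color. A minimal NumPy sketch (the 0.5 threshold and the tiny example arrays are assumptions, not your data):

```python
import numpy as np

def multilabel_report(y_true, y_prob, threshold=0.5):
    """Per-color precision and recall for a multi-label model.

    y_true: (n_samples, n_colors) array of 0/1 ground truth
    y_prob: (n_samples, n_colors) array of predicted probabilities
    """
    y_pred = (y_prob >= threshold).astype(int)
    tp = ((y_pred == 1) & (y_true == 1)).sum(axis=0)
    fp = ((y_pred == 1) & (y_true == 0)).sum(axis=0)
    fn = ((y_pred == 0) & (y_true == 1)).sum(axis=0)
    # np.maximum guards against division by zero for never-predicted colors.
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    return precision, recall

# Tiny example: 2 cards, 2 colors.
y_true = np.array([[1, 0], [1, 1]])
y_prob = np.array([[0.9, 0.2], [0.8, 0.4]])
p, r = multilabel_report(y_true, y_prob)
# Color 0 is predicted perfectly; color 1 is never predicted at all,
# so its precision and recall both come out 0.
```

Per-color numbers matter here: an overall accuracy can look fine while one color is never predicted.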

PS: I’ll upload an updated version.
