Compatibility Issues with Lab 2?

Hello, there seem to be some compatibility issues with Lab 2. When I run the code, it produces errors on these imports:

```python
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer
import torch
import time
import evaluate
import pandas as pd
import numpy as np
```

I did not make any modifications to the code. Here is the error:

```
ModuleNotFoundError                       Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/transformers/utils/ in _get_module(self, module_name)
   1125         try:
-> 1126             return importlib.import_module("." + module_name,
   1127         except Exception as e:

/opt/conda/lib/python3.7/importlib/ in import_module(name, package)
    126             level += 1
--> 127     return _bootstrap._gcd_import(name[level:], package, level)

/opt/conda/lib/python3.7/importlib/ in _gcd_import(name, package, level)

/opt/conda/lib/python3.7/importlib/ in _find_and_load(name, import_)

/opt/conda/lib/python3.7/importlib/ in _find_and_load_unlocked(name, import_)

/opt/conda/lib/python3.7/importlib/ in _load_unlocked(spec)

/opt/conda/lib/python3.7/importlib/ in exec_module(self, module)

/opt/conda/lib/python3.7/importlib/ in _call_with_frames_removed(f, *args, **kwds)

/opt/conda/lib/python3.7/site-packages/transformers/ in
     59 import torch
---> 60 import torch.distributed as dist

ModuleNotFoundError: No module named 'torch.distributed'

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
      1 from datasets import load_dataset
----> 2 from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer
      3 import torch
      4 import time
      5 import evaluate

/opt/conda/lib/python3.7/importlib/ in _handle_fromlist(module, fromlist, import_, recursive)

/opt/conda/lib/python3.7/site-packages/transformers/utils/ in __getattr__(self, name)
   1114             value = self._get_module(name)
   1115         elif name in self._class_to_module.keys():
-> 1116             module = self._get_module(self._class_to_module[name])
   1117             value = getattr(module, name)
   1118         else:

/opt/conda/lib/python3.7/site-packages/transformers/utils/ in _get_module(self, module_name)
   1129                 f"Failed to import {}.{module_name} because of the following error (look up to see its"
   1130                 f" traceback):\n{e}"
-> 1131             ) from e
   1133     def __reduce__(self):

RuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback):
No module named 'torch.distributed'
```

Any suggestions on how I can fix this?

Thank you.


Hi Abtine, and welcome to the community! That error is usually caused by an unsuccessful pip install. Please check that the cell just before those imports ran successfully.
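A quick way to confirm the install cell worked is to import each package the lab needs and print its version; a minimal sketch (the package list is taken from the imports above):

```python
import importlib

# Packages imported at the top of the lab notebook
packages = ["datasets", "transformers", "torch", "evaluate", "pandas", "numpy"]

for name in packages:
    try:
        module = importlib.import_module(name)
        print(f"{name}: {getattr(module, '__version__', 'unknown version')}")
    except Exception as exc:  # a failed or partial install surfaces here
        print(f"{name}: FAILED -> {exc}")
```

If any line prints `FAILED`, re-run the install cell and look at its output before retrying the imports.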

Before retrying, please visit the FAQ here, particularly item 10, which shows how to check the chosen instance type. Choosing a different instance type will often cause the kernel to crash.

Thank you!


Thank you, Chris, for the quick response. I have attached a screenshot. The installs appear to run just as in the lab walkthrough video. I rechecked everything but still get the same errors.


Hi Abtine. That’s strange. Can you remove the `--quiet` flags from the pip install commands and see whether the torch package installation fails? Also, make sure that you are not restarting the kernel.
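Since the traceback points at `torch.distributed` specifically, it can also help to check whether the installed torch build actually ships that submodule; a small sketch, assuming nothing about the lab beyond the error above:

```python
import importlib.util

# find_spec returns None instead of raising when a top-level package is absent
torch_spec = importlib.util.find_spec("torch")
if torch_spec is None:
    print("torch is not installed at all")
else:
    # Only probe the submodule once torch itself is importable
    dist_spec = importlib.util.find_spec("torch.distributed")
    print("torch.distributed available:", dist_spec is not None)
```

If torch imports but the submodule is missing, the pip output from the (un-quieted) install cell should show which torch build was actually installed.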

Thank you, Chris. It is working now. The system forced me to clear the workspace, and after re-running everything the imports succeeded. I am not sure what the issue was the first time. Best.
