Getting "No module named 'torch._C'" error for Lab 1

Hey Everyone,

I am following the instructions for Lab 1 and haven't made any changes to the provided code, but I am getting an error saying "No module named 'torch._C'". The error occurs in Section 2 (Summarize Dialogue without Prompt Engineering) when I try to run the following code:

model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

Does anyone know what the cause could be and how to resolve it?

Attached is the error log:

ModuleNotFoundError                       Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/transformers/utils/ in _get_module(self, module_name)
   1125         try:
-> 1126             return importlib.import_module("." + module_name, self.__name__)
   1127         except Exception as e:

/opt/conda/lib/python3.7/importlib/ in import_module(name, package)
    126             level += 1
--> 127     return _bootstrap._gcd_import(name[level:], package, level)

/opt/conda/lib/python3.7/importlib/ in _gcd_import(name, package, level)

/opt/conda/lib/python3.7/importlib/ in _find_and_load(name, import_)

/opt/conda/lib/python3.7/importlib/ in _find_and_load_unlocked(name, import_)

/opt/conda/lib/python3.7/importlib/ in _load_unlocked(spec)

/opt/conda/lib/python3.7/importlib/ in exec_module(self, module)

/opt/conda/lib/python3.7/importlib/ in _call_with_frames_removed(f, *args, **kwds)

/opt/conda/lib/python3.7/site-packages/transformers/models/t5/ in <module>
     24 import torch
---> 25 from torch import nn
     26 from torch.nn import CrossEntropyLoss

/opt/conda/lib/python3.7/site-packages/torch/nn/ in <module>
----> 1 from .modules import *  # noqa: F403
      2 from .parameter import (
      3     Parameter as Parameter,

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/ in <module>
----> 1 from .module import Module
      2 from .linear import Identity, Linear, Bilinear, LazyLinear
      3 from .conv import Conv1d, Conv2d, Conv3d, \

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/ in <module>
      7 import torch
----> 8 from ..parameter import Parameter
      9 import torch.utils.hooks as hooks

/opt/conda/lib/python3.7/site-packages/torch/nn/ in <module>
      1 import torch
----> 2 from torch._C import _disabled_torch_function_impl
      3 from collections import OrderedDict

ModuleNotFoundError: No module named 'torch._C'

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
<ipython-input-6-7ddda4075fb6> in <module>
      1 model_name='google/flan-t5-base'
----> 3 model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/ in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    468             )
    469         elif type(config) in cls._model_mapping.keys():
--> 470             model_class = _get_model_class(config, cls._model_mapping)
    471             return model_class.from_pretrained(
    472                 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/ in _get_model_class(config, model_mapping)
    359 def _get_model_class(config, model_mapping):
--> 360     supported_models = model_mapping[type(config)]
    361     if not isinstance(supported_models, (list, tuple)):
    362         return supported_models

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/ in __getitem__(self, key)
    600         if model_type in self._model_mapping:
    601             model_name = self._model_mapping[model_type]
--> 602             return self._load_attr_from_module(model_type, model_name)
    604         # Maybe there was several model types associated with this config.

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/ in _load_attr_from_module(self, model_type, attr)
    614         if module_name not in self._modules:
    615             self._modules[module_name] = importlib.import_module(f".{module_name}", "transformers.models")
--> 616         return getattribute_from_module(self._modules[module_name], attr)
    618     def keys(self):

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/ in getattribute_from_module(module, attr)
    559     if isinstance(attr, tuple):
    560         return tuple(getattribute_from_module(module, a) for a in attr)
--> 561     if hasattr(module, attr):
    562         return getattr(module, attr)
    563     # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the

/opt/conda/lib/python3.7/site-packages/transformers/utils/ in __getattr__(self, name)
   1114             value = self._get_module(name)
   1115         elif name in self._class_to_module.keys():
-> 1116             module = self._get_module(self._class_to_module[name])
   1117             value = getattr(module, name)
   1118         else:

/opt/conda/lib/python3.7/site-packages/transformers/utils/ in _get_module(self, module_name)
   1129                 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
   1130                 f" traceback):\n{e}"
-> 1131             ) from e
   1133     def __reduce__(self):

RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
No module named 'torch._C'
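For context: `torch._C` is PyTorch's compiled C++ extension, so this error usually means the kernel is running in an environment where PyTorch is missing or only partially installed. A quick way to check from a notebook cell, without crashing on a broken install, is a sketch like this (standard library only; nothing from the lab is assumed):

```python
import importlib.util

def module_available(name):
    """Return True if `name` is findable by the import system,
    without fully importing it (so a broken install won't raise here)."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ImportError, ValueError):
        # find_spec raises ModuleNotFoundError when a parent package
        # (e.g. `torch` for `torch._C`) is itself missing
        return False

# torch._C is the compiled extension the traceback reports as missing
for name in ("torch", "torch._C", "transformers"):
    print(f"{name}: {'found' if module_available(name) else 'MISSING'}")
```

If `torch` is found but `torch._C` is MISSING, the PyTorch install itself is broken and reinstalling (or restarting on the correct kernel) is the likely fix.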

Would you try reloading and rerunning the lab? Maybe that resolves the issue.

I have the same problem. I tried re-running the lab and it still gives me the same error.

Well maybe @esanina can be more helpful to you!


@Abdelrahman_Osama3 could you please check that the correct kernel and instance type are selected?
If the kernel is loaded correctly, please check that the pip installations and the module loads (the first two cells) are working.
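One thing worth adding: a common cause of this symptom is that `pip install` ran against a different Python environment than the one the notebook kernel is using. A minimal sketch of what "check the module loads" can look like, assuming nothing beyond the standard library plus an optional torch install:

```python
import sys

# The active kernel's interpreter: if this is not the environment where
# `pip install torch` ran, imports will fail even after a successful install.
print("Python executable:", sys.executable)
print("Python version:   ", sys.version.split()[0])

try:
    import torch
    print("torch:", torch.__version__, "imported from", torch.__file__)
except ImportError as e:
    print("torch import failed:", e)
```

If the printed executable is not the conda environment you installed into, restarting the notebook with the kernel the lab instructions specify should fix it.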