Generative AI with Large Language Models: Week 1 Assignment: Error loading the Python module

Hi,

There is some issue with the JupyterLab notebook provided for the Week 1 assignment. The following
import statement:
from transformers import AutoModelForSeq2SeqLM

fails with the error trace below.

 
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[1], line 2
      1 from datasets import load_dataset
----> 2 from transformers import AutoModelForSeq2SeqLM
      3 from transformers import AutoTokenizer
      4 from transformers import GenerationConfig

File /opt/conda/lib/python3.12/site-packages/transformers/__init__.py:27
     24 from typing import TYPE_CHECKING
     26 # Check the dependencies satisfy the minimal versions required.
---> 27 from . import dependency_versions_check
     28 from .utils import (
     29     OptionalDependencyNotAvailable,
     30     _LazyModule,
   (...)
     49     logging,
     50 )
     51 from .utils.import_utils import define_import_structure

File /opt/conda/lib/python3.12/site-packages/transformers/dependency_versions_check.py:16
      1 # Copyright 2020 The HuggingFace Team. All rights reserved.
      2 #
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)
     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
     15 from .dependency_versions_table import deps
---> 16 from .utils.versions import require_version, require_version_core
     19 # define which module versions we always want to check at run time
     20 # (usually the ones defined in `install_requires` in setup.py)
     21 #
     22 # order specific notes:
     23 # - tqdm must be checked before tokenizers
     25 pkgs_to_check_at_runtime = [
     26     "python",
     27     "tqdm",
   (...)
     37     "pyyaml",
     38 ]

File /opt/conda/lib/python3.12/site-packages/transformers/utils/__init__.py:24
     21 from packaging import version
     23 from .. import __version__
---> 24 from .args_doc import (
     25     ClassAttrs,
     26     ClassDocstring,
     27     ImageProcessorArgs,
     28     ModelArgs,
     29     ModelOutputArgs,
     30     auto_class_docstring,
     31     auto_docstring,
     32     get_args_doc_from_source,
     33     parse_docstring,
     34     set_min_indent,
     35 )
     36 from .backbone_utils import BackboneConfigMixin, BackboneMixin
     37 from .chat_template_utils import DocstringParsingException, TypeHintParsingException, get_json_schema

File /opt/conda/lib/python3.12/site-packages/transformers/utils/args_doc.py:30
     22 import regex as re
     24 from .doc import (
     25     MODELS_TO_PIPELINE,
     26     PIPELINE_TASKS_TO_SAMPLE_DOCSTRINGS,
     27     PT_SAMPLE_DOCSTRINGS,
     28     _prepare_output_docstrings,
     29 )
---> 30 from .generic import ModelOutput
     33 PATH_TO_TRANSFORMERS = Path("src").resolve() / "transformers"
     36 AUTODOC_FILES = [
     37     "configuration_*.py",
     38     "modeling_*.py",
   (...)
     43     "feature_extractor_*.py",
     44 ]

File /opt/conda/lib/python3.12/site-packages/transformers/utils/generic.py:480
    476         return tuple(self[k] for k in self.keys())
    479 if is_torch_available():
--> 480     import torch.utils._pytree as _torch_pytree
    482     def _model_output_flatten(output: ModelOutput) -> tuple[list[Any], "_torch_pytree.Context"]:
    483         return list(output.values()), list(output.keys())

File /opt/conda/lib/python3.12/site-packages/torch/utils/__init__.py:8
      5 import weakref
      7 import torch
----> 8 from torch.utils import (
      9     backcompat as backcompat,
     10     collect_env as collect_env,
     11     data as data,
     12     deterministic as deterministic,
     13     hooks as hooks,
     14 )
     15 from torch.utils.backend_registration import (
     16     generate_methods_for_privateuse1_backend,
     17     rename_privateuse1_backend,
     18 )
     19 from torch.utils.cpp_backtrace import get_cpp_backtrace

File /opt/conda/lib/python3.12/site-packages/torch/utils/backcompat/__init__.py:2
      1 # mypy: allow-untyped-defs
----> 2 from torch._C import _set_backcompat_broadcast_warn
      3 from torch._C import _get_backcompat_broadcast_warn
      4 from torch._C import _set_backcompat_keepdim_warn

ModuleNotFoundError: No module named 'torch._C'

This is the pip install statement in the notebook:

%pip install -U \
    datasets==2.17.0 \
    transformers==4.38.2 \
    evaluate==0.4.0 \
    rouge_score==0.1.2 \
    peft==0.3.0 --quiet

It seems the right module version of transformers isn’t pinned here.
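One quick way to narrow this down is to print which of the pinned packages, plus torch (which the traceback shows transformers importing under the hood), are actually installed in the environment. This is just a diagnostic sketch of my own, not part of the lab; `installed_versions` is a hypothetical helper name:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_versions(names):
    """Map each package name to its installed version, or None if absent."""
    report = {}
    for name in names:
        try:
            report[name] = version(name)
        except PackageNotFoundError:
            report[name] = None  # not installed, or metadata is broken
    return report

# The packages the lab's pip cell pins, plus torch, which the
# traceback shows transformers pulling in under the hood.
for pkg, ver in installed_versions(
    ["torch", "transformers", "datasets", "evaluate", "peft"]
).items():
    print(f"{pkg}: {ver or 'NOT INSTALLED'}")
```

If torch shows up as NOT INSTALLED (or half-installed with broken metadata), that would explain the `No module named 'torch._C'` failure.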


Did you try restarting the kernel (especially after the pip installs) and running the cells below again? I think this error has been reported here before, and that was the main cause of the issue.


Thanks, it works now. In my earlier attempts I did restart the kernel, but that hadn’t helped. This time, even without a kernel restart, I’m no longer getting the earlier error.

thanks,
sateesh


How do I solve this error? I have just started the lab and only have a limited amount of time to finish it; any help is sincerely appreciated.


Dear @saiBeeraka,

Welcome to the Community!

Sufficient time has been allocated to complete the lab. However, if you’re unable to finish it within the given duration, you may try again after some time.

@Girijesh
I appreciate you responding, but I am looking for a way to solve the same error mentioned in this thread. I have tried restarting the kernel, but it hasn’t helped.


I did try restarting the kernel, but I still can’t figure out what’s wrong.
It still shows the same error.


If I recall correctly, restarting the kernel didn’t help for me either. I closed the session, took a new one, and still got the same error. I left it at that, and a few hours later, when I started a new session, I no longer faced this error.

reg,
sateesh


Dear @saiBeeraka,

Can you please share the screenshot of the error?


You’ll need to install PyTorch manually. If you’re using a CPU-only environment (like many Jupyter setups), run this in a notebook cell:

%pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cpu --quiet

Then reinstall the other packages to ensure compatibility:

%pip install -U datasets==2.17.0 transformers==4.38.2 evaluate==0.4.0 rouge_score==0.1.2 peft==0.3.0 --quiet

Restart your kernel after installing, and it should work fine.
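To confirm the fix took effect, you could run a small sanity-check cell like the following after the kernel restart, before rerunning the lab imports. This is my own sketch; `can_import` is a hypothetical helper, not something from the lab:

```python
import importlib

def can_import(module_name):
    """Return True if the named module (e.g. 'torch._C') imports cleanly."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False

# torch._C is the compiled extension the traceback reported as missing;
# if it imports, the torch install is intact.
for mod in ["torch", "torch._C", "transformers"]:
    status = "OK" if can_import(mod) else "MISSING"
    print(f"{mod}: {status}")
```

If `torch._C` still shows MISSING after the reinstall and restart, the torch wheel itself is likely broken and worth reinstalling from scratch.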


Thanks to everyone who reached out to help.
Grateful for the support.
