Thread: Trying to install MiniGPT-4 locally->

  1. #1

    Thread Starter
    Frenzied Member
    Join Date
    Feb 2003
    Posts
    1,730

    Trying to install MiniGPT-4 locally->

    Hello,

    After installing Python 3.11.3 (64-bit) I downloaded the code for MiniGPT-4 at https://github.com/Vision-CAIR/MiniGPT-4.

    After that I ran the following commands to install all dependencies I am aware of:
    Code:
    C:\Users\...\AppData\Local\Programs\Python\Python311\python.exe -m pip install --upgrade pip
    pip install decord
    pip install gradio
    pip install iopath
    pip install numpy
    pip install omegaconf
    pip install opencv-python
    pip install sentencepiece
    pip install timm
    pip install torch
    pip install torchvision
    pip install transformers
    pip install webdataset
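To double-check that all of the above actually landed in the same interpreter that will run demo.py, I put together this small sketch (the PyPI name and the import name differ for some packages, e.g. opencv-python imports as cv2; the mapping below is my own guess, not from the MiniGPT-4 docs):
Code:
```python
# Verify each pip-installed dependency resolves under this interpreter.
# Note: the PyPI package name and the importable module name can differ
# (opencv-python -> cv2); the mapping below covers the installs above.
import importlib.util

PIP_TO_IMPORT = {
    "decord": "decord", "gradio": "gradio", "iopath": "iopath",
    "numpy": "numpy", "omegaconf": "omegaconf",
    "opencv-python": "cv2", "sentencepiece": "sentencepiece",
    "timm": "timm", "torch": "torch", "torchvision": "torchvision",
    "transformers": "transformers", "webdataset": "webdataset",
}

def check_missing(mapping):
    """Return the pip names whose import name cannot be resolved."""
    return [pip for pip, mod in mapping.items()
            if importlib.util.find_spec(mod) is None]

print("missing:", check_missing(PIP_TO_IMPORT) or "none")
```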
    When all that is done, I get the following error when trying to launch MiniGPT-4 using "C:\Users\...\AppData\Local\Programs\Python\Python311\python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0":

    Code:
    Initializing Chat
    Loading VIT
    Loading VIT Done
    Loading Q-Former
    Loading Q-Former Done
    Loading LLAMA
    Traceback (most recent call last):
      File "D:\Other\MiniGPT-4-main\demo.py", line 60, in <module>
        model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "D:\Other\MiniGPT-4-main\minigpt4\models\mini_gpt4.py", line 243, in from_config
        model = cls(
                ^^^^
      File "D:\Other\MiniGPT-4-main\minigpt4\models\mini_gpt4.py", line 86, in __init__
        self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model, use_fast=False)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\peter\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\tokenization_utils_base.py", line 1771, in from_pretrained
        resolved_vocab_files[file_id] = cached_file(
                                        ^^^^^^^^^^^^
      File "C:\Users\peter\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
        resolved_file = hf_hub_download(
                        ^^^^^^^^^^^^^^^^
      File "C:\Users\peter\AppData\Local\Programs\Python\Python311\Lib\site-packages\huggingface_hub\utils\_validators.py", line 112, in _inner_fn
        validate_repo_id(arg_value)
      File "C:\Users\peter\AppData\Local\Programs\Python\Python311\Lib\site-packages\huggingface_hub\utils\_validators.py", line 160, in validate_repo_id
        raise HFValidationError(
    huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/vicuna/weights/'. Use `repo_type` argument if needed.
    D:\Other\MiniGPT-4-main>
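    For what it's worth, the string '/path/to/vicuna/weights/' in the HFValidationError looks like a literal placeholder coming out of a config file rather than something pip can install. This sketch (my own rough approximation of the Hugging Face repo-id rule from the traceback, not the library's exact validator) shows why that value is rejected:
    Code:
```python
import os
import re

def classify_llama_model(value: str) -> str:
    """Guess how a from_pretrained() call will treat `value`.

    Repo ids must look like "name" or "namespace/name" (rough
    approximation of huggingface_hub's validate_repo_id); a leading
    slash, as in the literal placeholder "/path/to/vicuna/weights/",
    fails unless the string is an existing local directory.
    """
    if os.path.isdir(value):
        return "local directory (loaded from disk)"
    if re.fullmatch(r"[\w.\-]+(/[\w.\-]+)?", value):
        return "repo id (downloaded from the Hub)"
    return "invalid -> HFValidationError"

print(classify_llama_model("/path/to/vicuna/weights/"))
```
    So it seems I need to replace that placeholder with a real path to Vicuna weights somewhere in the MiniGPT-4 configs, which leads to my question below.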
    Searching MiniGPT-4.pdf for the term "LLAMA" leads me to this paragraph:
    Code:
    [30] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,
    and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
    https://github.com/tatsu-lab/stanford_alpaca, 2023.
    Does anyone know whether this "stanford_alpaca" is the missing piece, or whether it is just the model that "LLAMA" is based on?

    In short: what dependency is missing, and how do I install it?

    Thanks in advance. :-)
    Last edited by Peter Swinkels; May 22nd, 2023 at 11:50 AM.
