
Help!! Loading /root/AI/models/Embedding_Tools/glm-4-9b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error. #140

Open
cenzijing opened this issue Jan 3, 2025 · 3 comments

Comments

@cenzijing

I'm an idiot! I have no idea what went wrong!! Help!!

ComfyUI Error Report

Error Details

  • Node ID: 82
  • Node Type: advance_ebd_tool
  • Exception Type: ValueError
  • Exception Message: Loading /root/AI/models/Embedding_Tools/glm-4-9b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.

Stack Trace

  File "/root/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/root/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/root/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/root/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "/root/ComfyUI/custom_nodes/comfyui_LLM_party/custom_tool/ebd_tool.py", line 60, in file
    bge_embeddings = HuggingFaceBgeEmbeddings(

  File "/root/miniconda3/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 216, in warn_if_direct_instance
    return wrapped(self, *args, **kwargs)

  File "/root/miniconda3/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 344, in __init__
    self.client = sentence_transformers.SentenceTransformer(

  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 320, in __init__
    modules = self._load_auto_model(

  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 1528, in _load_auto_model
    transformer_model = Transformer(

  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 77, in __init__
    config = self._load_config(model_name_or_path, cache_dir, backend, config_args)

  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 128, in _load_config
    return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir)

  File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1020, in from_pretrained
    trust_remote_code = resolve_trust_remote_code(

  File "/root/miniconda3/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 678, in resolve_trust_remote_code
    raise ValueError(

System Information

  • ComfyUI Version: v0.3.10-10-ga618f768
  • Arguments: main.py --disable-metadata --listen --port 80
  • OS: posix
  • Python Version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0]
  • Embedded Python: false
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4090 D : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 25386352640
    • VRAM Free: 25015746560
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2025-01-03T21:03:42.130145 - [START] Security scan
2025-01-03T21:03:46.688109 - [DONE] Security scan
2025-01-03T21:03:46.765975 - ## ComfyUI-Manager: installing dependencies done.
2025-01-03T21:03:46.766087 - ** ComfyUI startup time: 2025-01-03 21:03:46.766055
2025-01-03T21:03:46.766264 - ** Platform: Linux
2025-01-03T21:03:46.766413 - ** Python version: 3.10.12 (main, Jul  5 2023, 18:54:27) [GCC 11.2.0]
2025-01-03T21:03:46.766557 - ** Python executable: /root/miniconda3/bin/python
2025-01-03T21:03:46.766705 - ** ComfyUI Path: /root/ComfyUI
2025-01-03T21:03:46.766873 - ** Log path: /root/ComfyUI/comfyui.log
2025-01-03T21:03:48.606983 - Prestartup times for custom nodes:
2025-01-03T21:03:48.607161 -    0.0 seconds: /root/ComfyUI/custom_nodes/rgthree-comfy
2025-01-03T21:03:48.607265 -    6.5 seconds: /root/ComfyUI/custom_nodes/ComfyUI-Manager
2025-01-03T21:03:50.563688 - Total VRAM 24210 MB, total RAM 515799 MB
2025-01-03T21:03:50.563886 - pytorch version: 2.5.1+cu124
2025-01-03T21:03:50.564201 - Set vram state to: NORMAL_VRAM
2025-01-03T21:03:50.564504 - Device: cuda:0 NVIDIA GeForce RTX 4090 D : cudaMallocAsync
2025-01-03T21:03:51.687839 - Using pytorch attention
2025-01-03T21:03:52.890757 - [Prompt Server] web root: /root/ComfyUI/web
2025-01-03T21:03:54.824871 - HTTP Request: GET https://api.deepbricks.ai/v1/models "HTTP/1.1 200 OK"
2025-01-03T21:03:57.733626 - Optional node movie_editor import failed with error: No module named 'moviepy.editor'. If you don't need to use this optional node, this reminder can be ignored.
2025-01-03T21:03:57.764667 - /root/miniconda3/lib/python3.10/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
2025-01-03T21:03:58.807606 - Optional node browser import failed with error: No module named 'browser_use'. If you don't need to use this optional node, this reminder can be ignored.
2025-01-03T21:03:58.819765 - libportaudio2 is already installed.
2025-01-03T21:04:00.089819 - llama-cpp installed
2025-01-03T21:04:03.742652 - Successfully installed py-cord[voice]
2025-01-03T21:04:03.771079 - Failed to install Playwright browsers: /root/miniconda3/bin/python: No module named playwright
2025-01-03T21:04:03.782305 - [rgthree-comfy] Loaded 42 extraordinary nodes. 🎉
2025-01-03T21:04:04.068893 - ------------------------------------------
2025-01-03T21:04:04.068983 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2025-01-03T21:04:04.069043 - ------------------------------------------
2025-01-03T21:04:04.069130 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2025-01-03T21:04:04.069190 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2025-01-03T21:04:04.069246 - ------------------------------------------
2025-01-03T21:04:04.072936 - [comfyui_controlnet_aux] | INFO -> Using ckpts path: /root/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
2025-01-03T21:04:04.073031 - [comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-01-03T21:04:04.073114 - [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-01-03T21:04:04.097324 - ### [START] ComfyUI AlekPet Nodes v1.0.35 ###
2025-01-03T21:04:04.902976 - Node -> ChatGLMNode: ChatGLM4TranslateCLIPTextEncodeNode, ChatGLM4TranslateTextNode [Loading]
2025-01-03T21:04:04.914076 - Node -> DeepTranslatorNode: DeepTranslatorCLIPTextEncodeNode, DeepTranslatorTextNode [Loading]
2025-01-03T21:04:04.917957 - Node -> GoogleTranslateNode: GoogleTranslateCLIPTextEncodeNode, GoogleTranslateTextNode [Loading]
2025-01-03T21:04:04.925088 - Node -> ArgosTranslateNode: ArgosTranslateCLIPTextEncodeNode, ArgosTranslateTextNode [Loading]
2025-01-03T21:04:04.927364 - Node -> PoseNode: PoseNode [Loading]
2025-01-03T21:04:04.933357 - Node -> ExtrasNode: PreviewTextNode, HexToHueNode, ColorsCorrectNode [Loading]
2025-01-03T21:04:04.965299 - Node -> IDENode: IDENode [Loading]
2025-01-03T21:04:05.098758 - Node -> PainterNode: PainterNode [Loading]
2025-01-03T21:04:05.099112 - ### [END] ComfyUI AlekPet Nodes ###
2025-01-03T21:04:05.232005 - ### Loading: ComfyUI-Manager (V2.55.5)
2025-01-03T21:04:05.288392 - ### ComfyUI Version: v0.3.10-10-ga618f768 | Released on '2024-12-29'
2025-01-03T21:04:05.293744 - Import times for custom nodes:
2025-01-03T21:04:05.293960 -    0.0 seconds: /root/ComfyUI/custom_nodes/websocket_image_save.py
2025-01-03T21:04:05.294423 -    0.0 seconds: /root/ComfyUI/custom_nodes/AIGODLIKE-COMFYUI-TRANSLATION
2025-01-03T21:04:05.294697 -    0.0 seconds: /root/ComfyUI/custom_nodes/rgthree-comfy
2025-01-03T21:04:05.295886 -    0.0 seconds: /root/ComfyUI/custom_nodes/ComfyUI_essentials
2025-01-03T21:04:05.296180 -    0.0 seconds: /root/ComfyUI/custom_nodes/comfyui_controlnet_aux
2025-01-03T21:04:05.296284 -    0.1 seconds: /root/ComfyUI/custom_nodes/ComfyUI-Manager
2025-01-03T21:04:05.296632 -    0.3 seconds: /root/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
2025-01-03T21:04:05.296747 -    1.1 seconds: /root/ComfyUI/custom_nodes/ComfyUI_Custom_Nodes_AlekPet
2025-01-03T21:04:05.296920 -   10.6 seconds: /root/ComfyUI/custom_nodes/comfyui_LLM_party
2025-01-03T21:04:05.312203 - Starting server

2025-01-03T21:04:05.312710 - To see the GUI go to: http://0.0.0.0:80
2025-01-03T21:04:05.312922 - To see the GUI go to: http://[::]:80
2025-01-03T21:04:05.515241 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-01-03T21:04:05.575429 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-01-03T21:04:05.619052 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-01-03T21:04:05.794791 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-03T21:04:05.824590 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-01-03T21:04:53.697777 - FETCH DATA from: /root/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
2025-01-03T21:04:53.869308 - [ERROR] An error occurred while retrieving information for the 'CR Select Font' node.
2025-01-03T21:04:53.870734 - Traceback (most recent call last):
  File "/root/ComfyUI/server.py", line 581, in get_object_info
    out[x] = node_info(x)
  File "/root/ComfyUI/server.py", line 548, in node_info
    info['input'] = obj_class.INPUT_TYPES()
  File "/root/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes/nodes/nodes_graphics_text.py", line 467, in INPUT_TYPES
    file_list = [f for f in os.listdir(font_dir) if os.path.isfile(os.path.join(font_dir, f)) and f.lower().endswith(".ttf")]
FileNotFoundError: [Errno 2] No such file or directory: '/usr/share/fonts/truetype'

2025-01-03T21:08:09.354429 - got prompt
2025-01-03T21:08:09.360658 - Failed to validate prompt for output 91:
2025-01-03T21:08:09.360834 - * load_file 84:
2025-01-03T21:08:09.360963 -   - Value not in list: relative_path: 'README_ZH.txt' not in ['test.xls', 'test.csv', 'test.txt', 'README_ZH.md', 'party_iisue.md', 'README.md', 'how_to_use_nodes_ZH.txt', 'story.json', 'test.xlsx', '量子永生教.json', '(NEW)工作、消费主义和新穷人[波兰]齐格蒙特·鲍曼(2).txt']
2025-01-03T21:08:09.361138 - Output will be ignored
2025-01-03T21:08:09.361289 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-01-03T21:08:15.901498 - got prompt
2025-01-03T21:08:16.462758 - /root/ComfyUI/custom_nodes/comfyui_LLM_party/custom_tool/ebd_tool.py:60: LangChainDeprecationWarning: The class `HuggingFaceBgeEmbeddings` was deprecated in LangChain 0.2.2 and will be removed in 1.0. An updated version of the class exists in the :class:`~langchain-huggingface package and should be used instead. To use it run `pip install -U :class:`~langchain-huggingface` and import as `from :class:`~langchain_huggingface import HuggingFaceEmbeddings``.
  bge_embeddings = HuggingFaceBgeEmbeddings(
2025-01-03T21:08:16.510222 - PyTorch version 2.5.1+cu124 available.
2025-01-03T21:08:16.511942 - JAX version 0.4.35 available.
2025-01-03T21:08:16.640281 - Load pretrained SentenceTransformer: /hy-tmp/AI_files/models/Embedding_Tools/ChatGLM3
2025-01-03T21:08:16.640501 - !!! Exception during processing !!! Path /hy-tmp/AI_files/models/Embedding_Tools/ChatGLM3 not found
2025-01-03T21:08:16.642590 - Traceback (most recent call last):
  File "/root/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/root/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/root/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/root/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/root/ComfyUI/custom_nodes/comfyui_LLM_party/custom_tool/ebd_tool.py", line 60, in file
    bge_embeddings = HuggingFaceBgeEmbeddings(
  File "/root/miniconda3/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 216, in warn_if_direct_instance
    return wrapped(self, *args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 344, in __init__
    self.client = sentence_transformers.SentenceTransformer(
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 295, in __init__
    raise ValueError(f"Path {model_name_or_path} not found")
ValueError: Path /hy-tmp/AI_files/models/Embedding_Tools/ChatGLM3 not found

2025-01-03T21:08:16.643105 - Prompt executed in 0.73 seconds
2025-01-03T21:09:36.637617 - got prompt
2025-01-03T21:09:36.653237 - Load pretrained SentenceTransformer: /root/AI/models/Embedding_Tools/glm-4-9b-chat
2025-01-03T21:09:36.653595 - No sentence-transformers model found with name /root/AI/models/Embedding_Tools/glm-4-9b-chat. Creating a new one with mean pooling.
2025-01-03T21:09:36.654707 - !!! Exception during processing !!! Loading /root/AI/models/Embedding_Tools/glm-4-9b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
2025-01-03T21:09:36.656226 - Traceback (most recent call last):
  File "/root/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/root/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/root/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/root/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/root/ComfyUI/custom_nodes/comfyui_LLM_party/custom_tool/ebd_tool.py", line 60, in file
    bge_embeddings = HuggingFaceBgeEmbeddings(
  File "/root/miniconda3/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 216, in warn_if_direct_instance
    return wrapped(self, *args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 344, in __init__
    self.client = sentence_transformers.SentenceTransformer(
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 320, in __init__
    modules = self._load_auto_model(
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 1528, in _load_auto_model
    transformer_model = Transformer(
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 77, in __init__
    config = self._load_config(model_name_or_path, cache_dir, backend, config_args)
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 128, in _load_config
    return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1020, in from_pretrained
    trust_remote_code = resolve_trust_remote_code(
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 678, in resolve_trust_remote_code
    raise ValueError(
ValueError: Loading /root/AI/models/Embedding_Tools/glm-4-9b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.

2025-01-03T21:09:36.657078 - Prompt executed in 0.01 seconds
2025-01-03T21:11:12.652810 - got prompt
2025-01-03T21:11:12.665307 - Load pretrained SentenceTransformer: /root/AI/models/Embedding_Tools/glm-4-9b-chat
2025-01-03T21:11:12.665523 - No sentence-transformers model found with name /root/AI/models/Embedding_Tools/glm-4-9b-chat. Creating a new one with mean pooling.
2025-01-03T21:11:12.665921 - !!! Exception during processing !!! Loading /root/AI/models/Embedding_Tools/glm-4-9b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
2025-01-03T21:11:12.666263 - Traceback (most recent call last):
  File "/root/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/root/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/root/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/root/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/root/ComfyUI/custom_nodes/comfyui_LLM_party/custom_tool/ebd_tool.py", line 60, in file
    bge_embeddings = HuggingFaceBgeEmbeddings(
  File "/root/miniconda3/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 216, in warn_if_direct_instance
    return wrapped(self, *args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 344, in __init__
    self.client = sentence_transformers.SentenceTransformer(
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 320, in __init__
    modules = self._load_auto_model(
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 1528, in _load_auto_model
    transformer_model = Transformer(
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 77, in __init__
    config = self._load_config(model_name_or_path, cache_dir, backend, config_args)
  File "/root/miniconda3/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 128, in _load_config
    return AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1020, in from_pretrained
    trust_remote_code = resolve_trust_remote_code(
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 678, in resolve_trust_remote_code
    raise ValueError(
ValueError: Loading /root/AI/models/Embedding_Tools/glm-4-9b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.

2025-01-03T21:11:12.666717 - Prompt executed in 0.01 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":98,"last_link_id":117,"nodes":[{"id":76,"type":"load_file","pos":[-30,300],"size":[320,140],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"file_content","type":"STRING","links":[105],"slot_index":0,"shape":3,"label":"file_content"}],"properties":{"Node name for S&R":"load_file"},"widgets_values":["","test.txt",true],"shape":1},{"id":87,"type":"tool_combine","pos":[670,560],"size":[310,320],"flags":{},"order":7,"mode":0,"inputs":[{"name":"tool1","type":"STRING","link":112,"widget":{"name":"tool1"},"label":"tool1"},{"name":"tool2","type":"STRING","link":113,"widget":{"name":"tool2"},"label":"tool2"},{"name":"tool3","type":"STRING","link":null,"widget":{"name":"tool3"},"label":"tool3"}],"outputs":[{"name":"tools","type":"STRING","links":[116],"slot_index":0,"shape":3,"label":"tools"}],"properties":{"Node name for S&R":"tool_combine"},"widgets_values":["","","",true],"shape":1},{"id":89,"type":"LLM","pos":[1030,300],"size":[440,590],"flags":{},"order":8,"mode":0,"inputs":[{"name":"model","type":"CUSTOM","link":115,"label":"model"},{"name":"images","type":"IMAGE","link":null,"label":"images","shape":7},{"name":"extra_parameters","type":"DICT","link":null,"label":"extra_parameters","shape":7},{"name":"system_prompt_input","type":"STRING","link":null,"widget":{"name":"system_prompt_input"},"label":"system_prompt_input"},{"name":"user_prompt_input","type":"STRING","link":null,"widget":{"name":"user_prompt_input"},"label":"user_prompt_input"},{"name":"tools","type":"STRING","link":116,"widget":{"name":"tools"},"label":"tools"},{"name":"file_content","type":"STRING","link":null,"widget":{"name":"file_content"},"label":"file_content"}],"outputs":[{"name":"assistant_response","type":"STRING","links":[117],"slot_index":0,"shape":3,"label":"assistant_response"},{"name":"history","type":"STRING","links":null,"shape":3,"label":"history"},{"name":"tool","type":"STRING","links":null,"shape":3,"label":"tool"},{"name":"image","type":"IMAGE","links":null,"
shape":3,"label":"image"}],"properties":{"Node name for S&R":"LLM"},"widgets_values":["你一个强大的人工智能助手。","你好,readme中,comfyui llm party是个什么项目",0.7,"disable","enable","disable","enable",1920,"","","","","",100,"",true,true,true,[false,true]],"shape":1},{"id":90,"type":"LLM_api_loader","pos":[670,300],"size":[320,130],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"model","type":"CUSTOM","links":[115],"slot_index":0,"shape":3,"label":"model"}],"properties":{"Node name for S&R":"LLM_api_loader"},"widgets_values":["qwen2:latest","","",true],"shape":1},{"id":91,"type":"show_text_party","pos":[1500,310],"size":[380,580],"flags":{},"order":9,"mode":0,"inputs":[{"name":"text","type":"STRING","link":117,"widget":{"name":"text"},"label":"text"}],"outputs":[{"name":"STRING","type":"STRING","links":null,"shape":6,"label":"STRING"}],"properties":{"Node name for S&R":"show_text_party"},"widgets_values":["","你好!首先,你需要了解`comfyui llm party`是一个用于在WebUI平台集成LLM(大型语言模型)系统的项目。通过配置与自定义设置进行优化,它旨在让开发者快速上手,同时提高现有社区对生成文本和对话过程的参与度。\n\n该项目的功能和应用可以包括但不限于:\n1. **文本生成**:基于输入提示或提供的上下文生成与之相关的文本内容。\n2. **问答系统**:提供与复杂或专业主题相关问题的自动解答机制,为用户、开发者以及广大社区成员节省时间资源。\n3. **代码建议/修复**:在程序员开发或者调试代码的时候进行协助,可能包括自动识别语法错误、补全语句,或者重构建议等。\n4. 
**跨领域知识应用辅助**:不论背景、兴趣还是特定语言的知识点方面为用户提供指导和支持。\n\n`llm party`作为部分(通常指的是多个单独的LLM系统的集合)或在comfyui中的一个具体模块,则旨在通过各种方式融入现有工作流程,提供高效、智能化的数据生成和咨询模式。这不仅能够提升团队效率,还极大地丰富了跨平台的交互体验和服务性能。\n\n如果您需要深度了解项目的详细配置与调用说明,请访问原始项目文档或者开发者社区(例如GitLab, GitHub等专业网站),这些地方通常有项目主页以及相关指南和讨论区,帮助您解答操作细节问题。如果语言沟通出现困难,请尝试在同样的语境或在相关社区平台上提出您的问题。\n\n```markdown\n如果您是英文环境中使用中文提问:\n\"Hello, I need to understand what 'README.md' specifically for the project 'comfyui llm party' refers to?\"\n```\n\n### 另一个请求范例:\n\n如果有需要获取特定用户疑问的详细文件信息,可以尝试使用 `data_base_advance` 选项。例如:\n\n输入示例:\n{ \"type\": \"object\", \n  \"properties\": {\n    \"question\": { \"type\": \"string\",\"description\": `要查询的问题,“comfyui llm party”在什么背景下”,默认使用英文作为输出。}, `\"file_name\"`, `'README.md}' },\n  }\n\n需要获取与用户提问相关的、详细描述 `comfyuidocumentaryllmnt party` 文件的信息,这里可能会返回关于其背景、意图或者功能实现的相关细节概述。\n\n为了更好地向你说明:  \n- 使用问题:\"查阅或分析用户上传的问题描述文档来寻找相关于 “comfyui llmntpart”的重要信息。”\n\n请调用 `data_base_advance api`.\n\n例如:\n```\n{\"type\": \"object\",  \n  \"Properties\": {\"Question\":\"What's purpose 'comFYi Ullmntpart'? 
Context and implications\",\"file_name\"=README.md,\"k\":3} \n}\n```"],"shape":1},{"id":92,"type":"Note","pos":[-390,260],"size":[340,640],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["#工作流介绍:\n通过将本地文档加载到 ComfyUI 中并结合词嵌入向量模型进行处理,LLM 能够更好地利用这些外部知识库来提供更加精确和丰富的回答。这种方式不仅扩展了 LLM 的知识覆盖面,还提升了其在处理专业化、定制化任务中的能力,特别是在安全性和保密性要求较高的场景中具有明显的优势。\n\n在实际应用中,虽然大多数大型语言模型(LLM)在训练时已经接触了大量的数据集,但由于世界上的数据量庞大且多变,LLM 并不能覆盖所有可能的知识点和背景信息。因此,在某些特定场景下,仅依靠 LLM 的内置知识库可能无法得到满意的回答。此时,结合多知识库的调用将大大提升 LLM 的回答质量,特别是在处理需要专业背景知识的任务时。\n\n在 ComfyUI 中,我们可以通过使用【加载文件】节点,将本地的文档文件加载到系统中,作为知识库或背景知识的一部分。这些文档可能包括技术文档、行业报告、公司内部的指南或任何与问题相关的参考资料。\n\n-----------------------------------------------------\n#应用场景:\n例如,在一个企业环境中,员工希望利用 LLM 来回答有关公司内部政策的问题。虽然 LLM 本身可能已经具备一些通用的政策知识,但具体到某个企业内部的政策细节,LLM 可能无法提供准确的回答。通过将企业的内部政策文件加载到 ComfyUI 中,作为知识库的一部分,LLM 就能够利用这些本地文件中包含的详细信息来生成更精准的回答。\n这种多知识库调用的方式不仅增强了 LLM 的实用性,还提供了更多的灵活性,使其能够根据实际需求动态加载和利用不同的数据源。\n\n-----------------------------------------------------\n#词嵌入向量模型的作用:\n在调用本地文档时,词嵌入向量模型扮演着至关重要的角色。当【加载文件】节点加载了本地的文档后,系统会使用高级词嵌入工具对文档内容进行向量化处理。词嵌入模型能够将文本数据转化为多维向量,这些向量能够表示文本的语义信息和上下文关系。\n\n通过这种向量化的处理,LLM 可以更好地理解和利用本地文档中的内容。词嵌入向量模型可以帮助 LLM:\n\n1. 语义匹配:通过计算问题与文档内容在向量空间上的余弦相似度,LLM 能够更准确地检索出与问题相关的内容,从而生成更加精准的回答。\n\n2. 语境理解:匹配的文本以最接近多个的文字段落的形式返回,LLM 在调用这些文字段落时,能够更好地理解文本的上下文,从而避免因为语境不清导致的回答错误。\n\n3. 
处理大规模数据:面对多个知识库时,LLM可以选择性的调用其中一个知识库,提高搜索的精确性和高效性,可以将不同知识库的相关信息放在系统提示词中,用户只需要正常交互,LLM即可自动匹配到最相关的知识库并查询。\n\n------------------------------------------------------\n#写在最后:\n- LLM_Party正在用心经营一片AI时代的后花园,我们希望能够在AI时代下成为众多参与者的一员,我们从开源社区中走来,也希望回到社区中去。\n- 欢迎大家来到我们用心经营的后花园:\n- 项目地址:https://github.com/heshengtao/comfyui_LLM_party\n\n- openart:https://openart.ai/workflows/profile/comfyui_llm_party?tab=workflows&sort=latest\n\n- LibLib:https://www.liblib.art/userpage/4378612c5b3341c79c0deab3101aeabb/publish/workflow\n\n- 哔哩哔哩:https://space.bilibili.com/26978344?spm_id_from=333.337.0.0\n\n- YouTube:https://www.youtube.com/@comfyui-LLM-party\n\n- discord:https://discord.com/invite/gxrQAYy6\n\n- QQ交流群:931057213\n\n- 微信交流群:Choo-Yong(添加小助理微信,统一通过后会添加至交流群)\n"],"color":"#432","bgcolor":"#653","shape":1},{"id":97,"type":"Note","pos":[-30,660],"size":[320,230],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["#工作流参数配置:\n【加载文件】\n- absolute_path: 填写本地文件的路径。填写示例:C:\\Users\\Documents\\Pro\\myfile.txt\n\n- relative_path:将本地的文件放入ComfyUI/custom_nodes/comfyui_LLM_party/file中,重新加载ComfyUI并且刷新前端界面即可自动识别文件。\n\n- Note:absolute_path和relative_path两种加载方式选其一即可,不需要两种方式同时加载文件。\n\n------------------------------------------------\n【词嵌入模型工具/高级词嵌入工具】\n- file_content:可以输入一个字符串,该字符串会被作为词嵌入模型的输入,模型会在这个字符串上进行搜索,根据question来返回最相关的文本内容。\n\n- path:填入向量模型的本地文件位置。示例填写:hy-tmp\\AI_files\\models\\Embedding_Tools\\ChatGLM3\n\n- k:是返回的段落数量,chuck_size为文本分割时,每个文本块的大小,默认为200,chuck_overlap为文本分割时,每个文本块之间的重叠大小,默认为50。\n\n- device:一般默认状态为[auto],自动选择你的cuda/mps/cpu设备,可根据实际情况进行调整。\n\n- chuck_size:文本分割时,每个文本块的大小,默认为200。\n\n- chuck_overlap:文本分割时,每个文本块之间的重叠大小,默认为50。\n\n- file_name:代表了知识库的名字,在与LLM交互时,可以直接通过file_name来调用对应的知识库。\n\n- 
base_path:本地向量数据库的加载路径,如果你输入了base_path,数据库将从已经生成过的本地向量库加载,如果没有填入,则会从file_content的内容中生成一个向量数据库。\n"],"color":"#c09430","bgcolor":"#653"},{"id":84,"type":"load_file","pos":[-30,490],"size":[310,130],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"file_content","type":"STRING","links":[108],"slot_index":0,"shape":3,"label":"file_content"}],"properties":{"Node name for S&R":"load_file"},"widgets_values":["","README_ZH.md",true],"shape":1},{"id":82,"type":"advance_ebd_tool","pos":[320,600],"size":[320,290],"flags":{},"order":6,"mode":0,"inputs":[{"name":"ebd_model","type":"EBD_MODEL","link":null,"slot_index":0,"label":"ebd_model","shape":7},{"name":"file_content","type":"STRING","link":108,"slot_index":1,"widget":{"name":"file_content"},"label":"file_content"}],"outputs":[{"name":"tool","type":"STRING","links":[113],"slot_index":0,"shape":3,"label":"tool"}],"properties":{"Node name for S&R":"advance_ebd_tool"},"widgets_values":["/root/AI/models/Embedding_Tools/glm-4-9b-chat","enable",5,"auto",200,50,"readme","",""],"shape":1},{"id":79,"type":"advance_ebd_tool","pos":[320,310],"size":[320,250],"flags":{},"order":5,"mode":0,"inputs":[{"name":"ebd_model","type":"EBD_MODEL","link":null,"slot_index":0,"label":"ebd_model","shape":7},{"name":"file_content","type":"STRING","link":105,"slot_index":1,"widget":{"name":"file_content"},"label":"file_content"}],"outputs":[{"name":"tool","type":"STRING","links":[112],"slot_index":0,"shape":3,"label":"tool"}],"properties":{"Node name for 
S&R":"advance_ebd_tool"},"widgets_values":["/root/AI/models/Embedding_Tools/glm-4-9b-chat","enable",5,"auto",200,50,"test","",""],"shape":1}],"links":[[105,76,0,79,1,"STRING"],[108,84,0,82,1,"STRING"],[112,79,0,87,0,"STRING"],[113,82,0,87,1,"STRING"],[115,90,0,89,0,"CUSTOM"],[116,87,0,89,5,"STRING"],[117,89,0,91,0,"STRING"]],"groups":[{"id":1,"title":"词嵌入向量模型(Advanced)","bounding":[310,230,340,670],"color":"#3f789e","font_size":24,"flags":{}},{"id":2,"title":"Files","bounding":[-40,230,340,670],"color":"#3f789e","font_size":24,"flags":{}},{"id":3,"title":"Group","bounding":[660,230,340,214],"color":"#3f789e","font_size":24,"flags":{}},{"id":4,"title":"LLM Party for Multi-RAG 多知识库分别调用","bounding":[-390,60,2276,158],"color":"#3f789e","font_size":118,"flags":{}},{"id":5,"title":"Tools Combine","bounding":[660,460,340,440],"color":"#3f789e","font_size":24,"flags":{}},{"id":6,"title":"LLM Apply ","bounding":[1010,230,467,674],"color":"#3f789e","font_size":24,"flags":{}},{"id":7,"title":"Text OutPut","bounding":[1490,230,400,670],"color":"#3f789e","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.7247295000000079,"offset":[284.82306368347133,235.21850901175333]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

@heshengtao
Owner

This issue is written really thoroughly! Finally someone who files issues properly, hahaha, very nice.
My guess is that you plugged a chat model in as the word-embedding model. Look for the bge zh model on HF or in the cloud drive linked on my project homepage; that kind of model is a word-embedding model, the kind used for RAG.
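The distinction matters because RAG retrieval only needs a model that maps text to vectors so the question and the stored passages can be compared by cosine similarity (as the workflow note above describes); a chat checkpoint like glm-4-9b-chat is not packaged for that. A toy sketch of the retrieval step with hand-made 2-D vectors; in the real node, an embedding model such as one from the BGE family would produce these vectors:

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm


def top_k(question_vec, passage_vecs, k=2):
    """Indices of the k passages most similar to the question vector."""
    order = sorted(
        range(len(passage_vecs)),
        key=lambda i: cosine(question_vec, passage_vecs[i]),
        reverse=True,
    )
    return order[:k]


# Toy vectors standing in for real embeddings of one question and three passages.
question = [1.0, 0.0]
passages = [
    [0.9, 0.1],  # on-topic passage
    [0.0, 1.0],  # unrelated passage
    [0.8, 0.3],  # somewhat related passage
]
print(top_k(question, passages))  # the unrelated passage is ranked last
```

The `k`, `chuck_size`, and `chuck_overlap` widgets on the advance_ebd_tool node control how many such top-ranked chunks come back and how the source text was split before embedding.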

@cenzijing
Author

This issue is written really thoroughly! Finally someone who files issues properly, hahaha, very nice. My guess is that you plugged a chat model in as the word-embedding model. Look for the bge zh model on HF or in the cloud drive linked on my project homepage; that kind of model is a word-embedding model, the kind used for RAG.

Ahhh, thank you so much! This is my first issue! Your party is really great to use!!!!

@cenzijing
Author

This issue is written really thoroughly! Finally someone who files issues properly, hahaha, very nice. My guess is that you plugged a chat model in as the word-embedding model. Look for the bge zh model on HF or in the cloud drive linked on my project homepage; that kind of model is a word-embedding model, the kind used for RAG.

You're right, I did fill in a chat model, because I wasn't really clear on what that word-embedding model was /(ㄒoㄒ)/
