
internlm2.5-chat-gptq-int4 fails at runtime #2013

Open
c935289832 opened this issue Aug 5, 2024 · 1 comment

System Info

ubuntu20.04

Running Xinference with Docker?

  • docker
  • pip install
  • installation from source

Version info

0.13.2

The command used to start Xinference

XINFERENCE_MODEL_SRC=modelscope xinference-local --host 0.0.0.0 --port 9997
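
(The launch request that actually triggered the failure is not shown in the report. With the 0.13-era CLI it would presumably look something like the line below; the model name, size, and quantization values here are assumptions for illustration, not taken from the report.)

xinference launch --model-engine vllm --model-name internlm2.5-chat --size-in-billions 7 --model-format gptq --quantization Int4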

Reproduction

2024-08-05 10:03:15,594 xinference.api.restful_api 63143 ERROR [address=0.0.0.0:43027, pid=65671] shape '[-1, 6, 128, 4096]' is invalid for input of size 4096
Traceback (most recent call last):
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/api/restful_api.py", line 847, in launch_model
model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/context.py", line 227, in send
return self._process_result_message(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
raise message.as_instanceof_cause()
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/pool.py", line 659, in send
result = await self._run_coro(message.message_id, coro)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
return await coro
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/api.py", line 384, in on_receive
return await super().on_receive(message) # type: ignore
^^^^^^^^^^^^^^^^^
File "xoscar/core.pyx", line 558, in on_receive
raise ex
File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
async with self._lock:
^^^^^^^^^^^^^^^^^
File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
with debug_async_timeout('actor_lock_timeout',
^^^^^^^^^^^^^^^^^
File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
result = await result
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/core/supervisor.py", line 988, in launch_builtin_model
await _launch_model()
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/core/supervisor.py", line 952, in _launch_model
await _launch_one_model(rep_model_uid)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/core/supervisor.py", line 932, in _launch_one_model
await worker_ref.launch_builtin_model(
^^^^^^^^^^^^^^^^^
File "xoscar/core.pyx", line 284, in __pyx_actor_method_wrapper
async with lock:
File "xoscar/core.pyx", line 287, in xoscar.core.__pyx_actor_method_wrapper
result = await result
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/core/utils.py", line 45, in wrapped
ret = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/core/worker.py", line 841, in launch_builtin_model
await model_ref.load()
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/context.py", line 227, in send
return self._process_result_message(result)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
raise message.as_instanceof_cause()
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/pool.py", line 659, in send
result = await self._run_coro(message.message_id, coro)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
return await coro
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xoscar/api.py", line 384, in on_receive
return await super().on_receive(message) # type: ignore
^^^^^^^^^^^^^^^^^
File "xoscar/core.pyx", line 558, in on_receive
raise ex
File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.on_receive
async with self._lock:
^^^^^^^^^^^^^^^^^
File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.on_receive
with debug_async_timeout('actor_lock_timeout',
^^^^^^^^^^^^^^^^^
File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.on_receive
result = await result
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/core/model.py", line 295, in load
self._model.load()
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/xinference/model/llm/vllm/core.py", line 233, in load
self._engine = AsyncLLMEngine.from_engine_args(engine_args)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 386, in from_engine_args
engine = cls(
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 340, in init
self.engine = self._init_engine(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 462, in _init_engine
return engine_class(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 222, in init
self.model_executor = executor_class(
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 41, in init
self._init_executor()
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/gpu_executor.py", line 24, in _init_executor
self.driver_worker.load_model()
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/worker.py", line 121, in load_model
self.model_runner.load_model()
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 134, in load_model
self.model = get_model(
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/init.py", line 21, in get_model
return loader.load_model(model_config=model_config,
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 243, in load_model
model.load_weights(
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/internlm2.py", line 316, in load_weights
loaded_weight = loaded_weight.view(-1, 2 + kv_groups,
^^^^^^^^^^^^^^^^^
RuntimeError: [address=0.0.0.0:43027, pid=65671] shape '[-1, 6, 128, 4096]' is invalid for input of size 4096
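
For anyone triaging this: the failing line in vllm/model_executor/models/internlm2.py applies a view to every checkpoint tensor whose name contains "wqkv", which only matches the dense fp16 layout. The sketch below reproduces the failure under two assumptions: internlm2.5-7b's published config values (hidden_size=4096, 32 attention heads, 8 KV heads), and that the tensor tripping the view is a 1-D GPTQ companion tensor such as wqkv.g_idx, whose element count of 4096 matches the size in the error message.

import torch

# Assumed config values for internlm2.5-7b (from its config.json):
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 8

kv_groups = num_attention_heads // num_key_value_heads  # 4
head_dim = hidden_size // num_attention_heads           # 128

# A dense fp16 wqkv has shape [kv_heads * (kv_groups + 2) * head_dim, hidden_size]
# = [6144, 4096], so the loader's view succeeds:
wqkv_fp16 = torch.empty(num_key_value_heads * (kv_groups + 2) * head_dim, hidden_size)
wqkv_fp16.view(-1, 2 + kv_groups, head_dim, hidden_size)  # OK -> [8, 6, 128, 4096]

# A GPTQ checkpoint also ships companion tensors under the same "wqkv" prefix,
# e.g. a 1-D g_idx of shape [hidden_size] = [4096]. The same view on it raises
# exactly the reported error:
g_idx = torch.empty(hidden_size, dtype=torch.int32)
g_idx.view(-1, 2 + kv_groups, head_dim, hidden_size)
# RuntimeError: shape '[-1, 6, 128, 4096]' is invalid for input of size 4096

If that diagnosis is right, the loader needs GPTQ-aware handling of the stacked wqkv tensors (for example, reshaping only tensors whose layout actually matches) rather than an unconditional view.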

Expected behavior

The model should load and run.

XprobeBot added the gpu label Aug 5, 2024
XprobeBot added this to the v0.14.0 milestone Aug 5, 2024
soulzzz commented Aug 14, 2024

Same here, any fix for this?

XprobeBot modified the milestones: v0.14, v0.15 Sep 3, 2024
XprobeBot modified the milestones: v0.15, v0.16 Oct 30, 2024
XprobeBot modified the milestones: v0.16, v1.x Nov 25, 2024