Describe the bug
While migrating to torch-2.2.0.dev20231126+cu118 I ran into the issue below. This may simply be due to it being a dev release, but I'm adding it for tracking purposes.
ERROR main : extract_t/rep-0@1330319
RuntimeError('CUDA error: initialization error\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n') during 'WorkerRuntime' initialization
add "--quiet-error" to suppress the exception details
Traceback (most recent call last):
  File "/home/greg/dev/marieai/marie-ai/marie/serve/executo…", line 143, in run
    runtime = AsyncNewLoopRuntime(
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 93, in __init__
    self._loop.run_until_complete(self.async_setup())
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 310, in async_setup
    self.server = self._get_server()
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 215, in _get_server
    return GRPCServer(
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 34, in __init__
    super().__init__(**kwargs)
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 70, in __init__
    ] = (req_handler or self._get_request_handler())
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 95, in _get_request_handler
    return self.req_handler_cls(
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 140, in __init__
    self._load_executor(
  File "/home/greg/dev/marieai/marie-ai/marie/serve/runtime…", line 379, in _load_executor
    self._executor: BaseExecutor = BaseExecutor.load_config(
  File "/home/greg/dev/marieai/marie-ai/marie/jaml/__init__…", line 792, in load_config
    obj = JAML.load(tag_yml, substitute=False, runtime_args=runtime_args)
  File "/home/greg/dev/marieai/marie-ai/marie/jaml/__init__…", line 174, in load
    r = yaml.load(stream, Loader=get_jina_loader_with_runtime(runtime_args))
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 81, in load
    return loader.get_single_data()
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 51, in get_single_data
    return self.construct_document(node)
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 55, in construct_document
    data = self.construct_object(node)
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 100, in construct_object
    data = constructor(self, node)
  File "/home/greg/dev/marieai/marie-ai/marie/jaml/__init__…", line 582, in _from_yaml
    return get_parser(cls, version=data.get('version', None)).parse(
  File "/home/greg/dev/marieai/marie-ai/marie/jaml/parsers/…", line 46, in parse
    obj = cls(
  File "/home/greg/dev/marieai/marie-ai/marie/serve/executo…", line 58, in arg_wrapper
    f = func(self, *args, **kwargs)
  File "/home/greg/dev/marieai/marie-ai/marie/serve/helper.…", line 74, in arg_wrapper
    f = func(self, *args, **kwargs)
  File "/home/greg/dev/marieai/marie-ai/marie/executor/text…", line 98, in __init__
    self.pipeline = ExtractPipeline(pipeline_config=pipeline, cuda=use_cuda)
  File "/home/greg/dev/marieai/marie-ai/marie/pipe/extract_…", line 94, in __init__
    self.overlay_processor = OverlayProcessor(
  File "/home/greg/dev/marieai/marie-ai/marie/overlay/overl…", line 44, in __init__
    self.opt, self.model = self.__setup(cuda, checkpoint_dir)
  File "/home/greg/dev/marieai/marie-ai/marie/overlay/overl…", line 109, in __setup
    model = create_model(opt)
  File "/home/greg/dev/marieai/marie-ai/marie/models/pix2pi…", line 75, in create_model
    instance = model(opt)
  File "/home/greg/dev/marieai/marie-ai/marie/models/pix2pi…", line 45, in __init__
    self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG,
  File "/home/greg/dev/marieai/marie-ai/marie/models/pix2pi…", line 271, in define_G
    return init_net(net, init_type, init_gain, gpu_ids)
  File "/home/greg/dev/marieai/marie-ai/marie/models/pix2pi…", line 151, in init_net
    net.to("cuda")
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 1152, in to
    return self._apply(convert)
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 802, in _apply
    module._apply(fn)
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 802, in _apply
    module._apply(fn)
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 825, in _apply
    param_applied = fn(param)
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/home/greg/dev/marieai/marie-ai/venv/lib/python3.10…", line 302, in _lazy_init
    torch._C._cuda_init()
RuntimeError: CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
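For what it's worth, the failure happens when init_net moves the pix2pix generator with net.to("cuda") and torch performs its lazy torch._C._cuda_init(). Below is a minimal, standalone sketch (not marie-ai code; the small nn.Linear is just a stand-in for the generator) that exercises the same call path outside the WorkerRuntime:

```python
# Standalone probe for the same call chain shown in the traceback:
#   Module.to("cuda") -> Module._apply -> convert -> lazy CUDA init -> torch._C._cuda_init()
# Assumes the same venv with torch-2.2.0.dev20231126+cu118 installed.
import torch
import torch.nn as nn

print("torch:", torch.__version__, "built for CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())

net = nn.Linear(4, 4)  # stand-in for the generator built by define_G/init_net
net.to("cuda")         # the step that raised "CUDA error: initialization error" above
print("parameters now on:", next(net.parameters()).device)
```

If this runs cleanly on its own, the error is probably specific to how the executor process initializes CUDA (e.g. forked workers); if it fails the same way, the dev wheel itself cannot initialize CUDA in this environment.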
Describe how you solve it
Use the stable release, torch 2.1.1, instead of the dev build.
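A rough sketch of how that workaround could be enforced at startup; this is illustrative only, not marie-ai code, and assumes the packaging library is available in the venv:

```python
# Refuse to start on dev/pre-release torch builds so the executor never reaches
# the CUDA init path that breaks on torch-2.2.0.dev20231126+cu118.
import torch
from packaging import version

if version.parse(torch.__version__).is_prerelease:
    raise RuntimeError(
        f"torch {torch.__version__} is a pre-release build; "
        "install the stable release instead, e.g. `pip install torch==2.1.1`."
    )
```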
Environment
torch-2.2.0.dev20231126+cu118, Python 3.10, CUDA 11.8.

Screenshots