runtime error
Exit code: 1. Reason:
chat_template.jinja: 100%|██████████| 1.38k/1.38k [00:00<00:00, 5.57MB/s]
config.json: 100%|██████████| 1.31k/1.31k [00:00<00:00, 5.14MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
model.safetensors: 100%|██████████| 740M/740M [00:02<00:00, 359MB/s]
Traceback (most recent call last):
  File "/app/app.py", line 53, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 373, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3951, in from_pretrained
    model, missing_keys, unexpected_keys, mismatched_keys, offload_index, error_msgs = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4058, in _load_pretrained_model
    caching_allocator_warmup(model, expanded_device_map, hf_quantizer)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4591, in caching_allocator_warmup
    device_memory = torch_accelerator_module.mem_get_info(index)[0]
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/memory.py", line 838, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 489, in cudart
    _lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 412, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
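
The crash is CUDA initialization failing in a container that has no NVIDIA driver: during checkpoint loading, transformers warms up accelerator memory and torch._C._cuda_init() finds no GPU. One possible fix is to make /app/app.py fall back to CPU when CUDA is unavailable. Below is a minimal sketch, not the app's actual code: the model ID is a placeholder (the real checkpoint name is not visible in the log), and it assumes app.py currently requests a CUDA device while calling from_pretrained.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "org/model"  # placeholder: the actual checkpoint is not shown in the log

# Only use CUDA when a driver/GPU is actually present, so torch.cuda is never initialized on CPU-only hosts.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    dtype=torch.float16 if device == "cuda" else torch.float32,  # `dtype` replaces the deprecated `torch_dtype`
)
model.to(device)

Alternatively, if the container is meant to have a GPU, the runtime needs to expose one (NVIDIA driver plus GPU hardware selected for the Space/container) rather than changing the code.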