What is the minimum vLLM version required to serve this model?
#6
by thameem-abbas - opened
[2025-10-01 00:48:59] [8e6790a6] INFO: (Worker_TP0 pid=17730) ERROR 10-01 00:48:57 [multiproc_executor.py:654]     activation_callable = lambda o, i: self.activation(activation, o, i)
[2025-10-01 00:48:59] [8e6790a6] INFO: (Worker_TP0 pid=17730) ERROR 10-01 00:48:57 [multiproc_executor.py:654]                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:59] [8e6790a6] INFO: (Worker_TP0 pid=17730) ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/modular_kernel.py", line 413, in activation
[2025-10-01 00:48:59] [8e6790a6] INFO: (Worker_TP0 pid=17730) ERROR 10-01 00:48:57 [multiproc_executor.py:654]     raise ValueError(f"Unsupported FusedMoe activation: {activation}")
[2025-10-01 00:48:59] [8e6790a6] INFO: (Worker_TP0 pid=17730) ERROR 10-01 00:48:57 [multiproc_executor.py:654] ValueError: Unsupported FusedMoe activation: swigluoai
I get the error above on vLLM 0.10.2.
Hardware: 2x H100
vLLM command: python3 -m vllm.entrypoints.openai.api_server --model /mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1 --port 48749 --host 0.0.0.0 --trust-remote-code --no-enable-prefix-caching --tensor-parallel-size 2
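In case it helps, this is the preflight check I run before launching the server. It is only a sketch: the minimum version in it is a placeholder, since finding out the real minimum is exactly what I am asking here.

```python
# Hedged sketch, not an official requirement: check the installed vLLM version
# before starting the server. REQUIRED_MIN is a placeholder -- I don't know which
# release actually adds the "swigluoai" FusedMoE activation.
from importlib.metadata import version

from packaging.version import Version

REQUIRED_MIN = Version("0.10.2")  # placeholder; bump once the real minimum is known
installed = Version(version("vllm"))

print(f"installed vllm: {installed}, assumed minimum: {REQUIRED_MIN}")
if installed < REQUIRED_MIN:
    raise SystemExit(
        "vLLM is older than the assumed minimum for "
        "gpt-oss-120b-FP8-dynamic (swigluoai MoE activation)."
    )
```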
Full log below:
[2025-10-01 00:46:38] [8e6790a6] INFO: Starting vLLM server: python3 -m vllm.entrypoints.openai.api_server --model /mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1 --port 48749 --host 0.0.0.0 --trust-remote-code --no-enable-prefix-caching --tensor-parallel-size 2
[2025-10-01 00:46:38] [8e6790a6] INFO: Environment variables: {'VLLM_USE_V1': '1', 'HF_HOME': '/mnt/models', 'MIN_GPUS': '2', 'VLLM_STARTUP_TIMEOUT': '900'}
[2025-10-01 00:46:38] [8e6790a6] INFO: Waiting for vLLM server to be ready at http://localhost:48749/health (timeout: 900s)
[2025-10-01 00:46:38] [8e6790a6] INFO: /usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
[2025-10-01 00:46:38] [8e6790a6] INFO: import pynvml  # type: ignore[import]
[2025-10-01 00:46:42] [8e6790a6] INFO: INFO 10-01 00:46:42 [__init__.py:216] Automatically detected platform cuda.
[2025-10-01 00:46:43] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m INFO 10-01 00:46:43 [api_server.py:1896] vLLM API server version 0.10.2
[2025-10-01 00:46:43] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m INFO 10-01 00:46:43 [utils.py:328] non-default args: {'host': '0.0.0.0', 'port': 48749, 'model': '/mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1', 'trust_remote_code': True, 'tensor_parallel_size': 2, 'enable_prefix_caching': False}
[2025-10-01 00:46:43] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
[2025-10-01 00:46:50] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m INFO 10-01 00:46:50 [__init__.py:742] Resolved architecture: GptOssForCausalLM
[2025-10-01 00:46:50] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-01 00:46:50] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m INFO 10-01 00:46:50 [__init__.py:1815] Using max model len 131072
[2025-10-01 00:46:51] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m WARNING 10-01 00:46:51 [_ipex_ops.py:16] Import error msg: No module named 'intel_extension_for_pytorch'
[2025-10-01 00:46:51] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m INFO 10-01 00:46:51 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=8192.
[2025-10-01 00:46:51] [8e6790a6] INFO: [1;36m(APIServer pid=17684)[0;0m INFO 10-01 00:46:51 [config.py:284] Overriding max cuda graph capture size to 1024 for performance.
[2025-10-01 00:46:53] [8e6790a6] INFO: /usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
[2025-10-01 00:46:53] [8e6790a6] INFO: import pynvml  # type: ignore[import]
[2025-10-01 00:46:56] [8e6790a6] INFO: INFO 10-01 00:46:56 [__init__.py:216] Automatically detected platform cuda.
[2025-10-01 00:46:59] [8e6790a6] INFO: [1;36m(EngineCore_DP0 pid=17714)[0;0m INFO 10-01 00:46:59 [core.py:654] Waiting for init message from front-end.
[2025-10-01 00:46:59] [8e6790a6] INFO: [1;36m(EngineCore_DP0 pid=17714)[0;0m INFO 10-01 00:46:59 [core.py:76] Initializing a V1 LLM engine (v0.10.2) with config: model='/mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1', speculative_config=None, tokenizer='/mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=compressed-tensors, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend='openai_gptoss'), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1, enable_prefix_caching=False, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":1,"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[1024,1008,992,976,960,944,928,912,896,880,864,848,832,816,800,784,768,752,736,720,704,688,672,656,640,624,608,592,576,560,544,528,512,496,480,464,448,432,416,400,384,368,352,336,320,304,288,272,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":1024,"local_cache_dir":null}
[2025-10-01 00:46:59] [8e6790a6] INFO: [1;36m(EngineCore_DP0 pid=17714)[0;0m INFO 10-01 00:46:59 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1], buffer_handle=(2, 16777216, 10, 'psm_5e4c765c'), local_subscribe_addr='ipc:///tmp/28e529d2-4d25-4858-b46b-dc06d78f0563', remote_subscribe_addr=None, remote_addr_ipv6=False)
[2025-10-01 00:46:59] [8e6790a6] INFO: /usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
[2025-10-01 00:46:59] [8e6790a6] INFO: import pynvml  # type: ignore[import]
[2025-10-01 00:46:59] [8e6790a6] INFO: /usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
[2025-10-01 00:46:59] [8e6790a6] INFO: import pynvml  # type: ignore[import]
[2025-10-01 00:47:03] [8e6790a6] INFO: INFO 10-01 00:47:03 [__init__.py:216] Automatically detected platform cuda.
[2025-10-01 00:47:03] [8e6790a6] INFO: INFO 10-01 00:47:03 [__init__.py:216] Automatically detected platform cuda.
[2025-10-01 00:47:05] [8e6790a6] INFO: W1001 00:47:05.950000 17731 torch/utils/cpp_extension.py:2425] TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
[2025-10-01 00:47:05] [8e6790a6] INFO: W1001 00:47:05.950000 17731 torch/utils/cpp_extension.py:2425] If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'] to specific architectures.
[2025-10-01 00:47:05] [8e6790a6] INFO: W1001 00:47:05.951000 17730 torch/utils/cpp_extension.py:2425] TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
[2025-10-01 00:47:05] [8e6790a6] INFO: W1001 00:47:05.951000 17730 torch/utils/cpp_extension.py:2425] If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'] to specific architectures.
[2025-10-01 00:47:07] [8e6790a6] INFO: INFO 10-01 00:47:07 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_2f92b90e'), local_subscribe_addr='ipc:///tmp/3d3fd92b-54dc-4843-a2f5-4a997a3e1318', remote_subscribe_addr=None, remote_addr_ipv6=False)
[2025-10-01 00:47:07] [8e6790a6] INFO: INFO 10-01 00:47:07 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_34ef2957'), local_subscribe_addr='ipc:///tmp/35ff3515-e4b0-4132-884c-1c6df2a67d10', remote_subscribe_addr=None, remote_addr_ipv6=False)
[2025-10-01 00:47:08] [8e6790a6] INFO: [W1001 00:47:08.560534533 ProcessGroupNCCL.cpp:981] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS is the default now, this environment variable is thus deprecated. (function operator())
[2025-10-01 00:47:08] [8e6790a6] INFO: [W1001 00:47:08.567517017 ProcessGroupNCCL.cpp:981] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS is the default now, this environment variable is thus deprecated. (function operator())
[2025-10-01 00:47:08] [8e6790a6] INFO: [rank0]:[W1001 00:47:08.569114459 ProcessGroupGloo.cpp:514] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
[2025-10-01 00:47:13] [8e6790a6] INFO: [rank1]:[W1001 00:47:13.567369118 ProcessGroupGloo.cpp:514] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
[2025-10-01 00:47:13] [8e6790a6] INFO: [Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[2025-10-01 00:47:13] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[2025-10-01 00:47:18] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[2025-10-01 00:47:18] [8e6790a6] INFO: [Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[2025-10-01 00:47:18] [8e6790a6] INFO: INFO 10-01 00:47:18 [__init__.py:1433] Found nccl from library libnccl.so.2
[2025-10-01 00:47:18] [8e6790a6] INFO: INFO 10-01 00:47:18 [__init__.py:1433] Found nccl from library libnccl.so.2
[2025-10-01 00:47:18] [8e6790a6] INFO: INFO 10-01 00:47:18 [pynccl.py:70] vLLM is using nccl==2.27.3
[2025-10-01 00:47:18] [8e6790a6] INFO: INFO 10-01 00:47:18 [pynccl.py:70] vLLM is using nccl==2.27.3
[2025-10-01 00:47:19] [8e6790a6] INFO: INFO 10-01 00:47:19 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report.
[2025-10-01 00:47:19] [8e6790a6] INFO: INFO 10-01 00:47:19 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report.
[2025-10-01 00:47:19] [8e6790a6] INFO: INFO 10-01 00:47:19 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_2dfefd86'), local_subscribe_addr='ipc:///tmp/a2c62c86-6058-46d2-9193-dc753bbf00e6', remote_subscribe_addr=None, remote_addr_ipv6=False)
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[2025-10-01 00:47:19] [8e6790a6] INFO: [Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[2025-10-01 00:47:19] [8e6790a6] INFO: INFO 10-01 00:47:19 [parallel_state.py:1165] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
[2025-10-01 00:47:19] [8e6790a6] INFO: INFO 10-01 00:47:19 [parallel_state.py:1165] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1, EP rank 1
[2025-10-01 00:47:19] [8e6790a6] INFO: INFO 10-01 00:47:19 [topk_topp_sampler.py:58] Using FlashInfer for top-p & top-k sampling.
[2025-10-01 00:47:19] [8e6790a6] INFO: INFO 10-01 00:47:19 [topk_topp_sampler.py:58] Using FlashInfer for top-p & top-k sampling.
[2025-10-01 00:47:19] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:47:19 [gpu_model_runner.py:2338] Starting to load model /mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1...
[2025-10-01 00:47:19] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:47:19 [gpu_model_runner.py:2338] Starting to load model /mnt/models/hub/models--RedHatAI--gpt-oss-120b-FP8-dynamic/snapshots/c3e1b9cd12fd74e3d6dc625f7f80333514342eb1...
[2025-10-01 00:47:19] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:47:19 [gpu_model_runner.py:2370] Loading model from scratch...
[2025-10-01 00:47:19] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:47:19 [gpu_model_runner.py:2370] Loading model from scratch...
[2025-10-01 00:47:19] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:47:19 [cuda.py:362] Using Flash Attention backend on V1 engine.
[2025-10-01 00:47:19] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:47:19 [cuda.py:362] Using Flash Attention backend on V1 engine.
[2025-10-01 00:47:20] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:20] [8e6790a6] INFO: Loading safetensors checkpoint shards:   0% Completed | 0/24 [00:00<?, ?it/s]
[2025-10-01 00:47:22] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:22] [8e6790a6] INFO: Loading safetensors checkpoint shards:   4% Completed | 1/24 [00:02<00:49,  2.16s/it]
[2025-10-01 00:47:24] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:24] [8e6790a6] INFO: Loading safetensors checkpoint shards:   8% Completed | 2/24 [00:04<00:48,  2.21s/it]
[2025-10-01 00:47:30] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:30] [8e6790a6] INFO: Loading safetensors checkpoint shards:  12% Completed | 3/24 [00:10<01:19,  3.81s/it]
[2025-10-01 00:47:32] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:32] [8e6790a6] INFO: Loading safetensors checkpoint shards:  17% Completed | 4/24 [00:12<01:03,  3.19s/it]
[2025-10-01 00:47:34] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:34] [8e6790a6] INFO: Loading safetensors checkpoint shards:  21% Completed | 5/24 [00:14<00:54,  2.88s/it]
[2025-10-01 00:47:36] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:36] [8e6790a6] INFO: Loading safetensors checkpoint shards:  25% Completed | 6/24 [00:16<00:46,  2.59s/it]
[2025-10-01 00:47:38] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:38] [8e6790a6] INFO: Loading safetensors checkpoint shards:  29% Completed | 7/24 [00:18<00:41,  2.46s/it]
[2025-10-01 00:47:41] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:41] [8e6790a6] INFO: Loading safetensors checkpoint shards:  33% Completed | 8/24 [00:21<00:38,  2.39s/it]
[2025-10-01 00:47:43] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:43] [8e6790a6] INFO: Loading safetensors checkpoint shards:  38% Completed | 9/24 [00:23<00:34,  2.29s/it]
[2025-10-01 00:47:45] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:45] [8e6790a6] INFO: Loading safetensors checkpoint shards:  42% Completed | 10/24 [00:25<00:31,  2.22s/it]
[2025-10-01 00:47:51] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:51] [8e6790a6] INFO: Loading safetensors checkpoint shards:  46% Completed | 11/24 [00:31<00:44,  3.43s/it]
[2025-10-01 00:47:53] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:53] [8e6790a6] INFO: Loading safetensors checkpoint shards:  50% Completed | 12/24 [00:33<00:36,  3.02s/it]
[2025-10-01 00:47:55] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:55] [8e6790a6] INFO: Loading safetensors checkpoint shards:  54% Completed | 13/24 [00:35<00:30,  2.73s/it]
[2025-10-01 00:47:57] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:57] [8e6790a6] INFO: Loading safetensors checkpoint shards:  58% Completed | 14/24 [00:37<00:25,  2.53s/it]
[2025-10-01 00:47:59] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:47:59] [8e6790a6] INFO: Loading safetensors checkpoint shards:  62% Completed | 15/24 [00:39<00:21,  2.42s/it]
[2025-10-01 00:48:01] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:01] [8e6790a6] INFO: Loading safetensors checkpoint shards:  67% Completed | 16/24 [00:41<00:18,  2.29s/it]
[2025-10-01 00:48:03] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:03] [8e6790a6] INFO: Loading safetensors checkpoint shards:  71% Completed | 17/24 [00:43<00:15,  2.21s/it]
[2025-10-01 00:48:06] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:06] [8e6790a6] INFO: Loading safetensors checkpoint shards:  75% Completed | 18/24 [00:46<00:13,  2.20s/it]
[2025-10-01 00:48:08] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:08] [8e6790a6] INFO: Loading safetensors checkpoint shards:  79% Completed | 19/24 [00:48<00:11,  2.21s/it]
[2025-10-01 00:48:10] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:10] [8e6790a6] INFO: Loading safetensors checkpoint shards:  83% Completed | 20/24 [00:50<00:08,  2.20s/it]
[2025-10-01 00:48:12] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:12] [8e6790a6] INFO: Loading safetensors checkpoint shards:  88% Completed | 21/24 [00:52<00:06,  2.13s/it]
[2025-10-01 00:48:14] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:14] [8e6790a6] INFO: Loading safetensors checkpoint shards:  92% Completed | 22/24 [00:54<00:04,  2.15s/it]
[2025-10-01 00:48:16] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:16] [8e6790a6] INFO: Loading safetensors checkpoint shards:  96% Completed | 23/24 [00:56<00:02,  2.16s/it]
[2025-10-01 00:48:18] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:18] [8e6790a6] INFO: Loading safetensors checkpoint shards: 100% Completed | 24/24 [00:58<00:00,  2.17s/it]
[2025-10-01 00:48:18] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:18] [8e6790a6] INFO: Loading safetensors checkpoint shards: 100% Completed | 24/24 [00:58<00:00,  2.46s/it]
[2025-10-01 00:48:18] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m
[2025-10-01 00:48:18] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:48:18 [default_loader.py:268] Loading weights took 58.95 seconds
[2025-10-01 00:48:18] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:48:18 [default_loader.py:268] Loading weights took 58.96 seconds
[2025-10-01 00:48:19] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:48:19 [gpu_model_runner.py:2392] Model loading took 55.5590 GiB and 59.224248 seconds
[2025-10-01 00:48:19] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:48:19 [gpu_model_runner.py:2392] Model loading took 55.5590 GiB and 59.266227 seconds
[2025-10-01 00:48:24] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:48:24 [backends.py:539] Using cache directory: /root/.cache/vllm/torch_compile_cache/7789b030fb/rank_1_0/backbone for vLLM's torch.compile
[2025-10-01 00:48:24] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:48:24 [backends.py:550] Dynamo bytecode transform time: 4.48 s
[2025-10-01 00:48:24] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:48:24 [backends.py:539] Using cache directory: /root/.cache/vllm/torch_compile_cache/7789b030fb/rank_0_0/backbone for vLLM's torch.compile
[2025-10-01 00:48:24] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:48:24 [backends.py:550] Dynamo bytecode transform time: 4.93 s
[2025-10-01 00:48:27] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:48:27 [backends.py:194] Cache the graph for dynamic shape for later use
[2025-10-01 00:48:27] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:48:27 [backends.py:194] Cache the graph for dynamic shape for later use
[2025-10-01 00:48:56] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m INFO 10-01 00:48:56 [backends.py:215] Compiling a graph for dynamic shape takes 31.57 s
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP0 pid=17730)[0;0m INFO 10-01 00:48:57 [backends.py:215] Compiling a graph for dynamic shape takes 32.07 s
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654] WorkerProc hit an exception.
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654] Traceback (most recent call last):
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 649, in worker_busy_loop
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     output = func(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]              ^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return func(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 263, in determine_available_memory
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     self.model_runner.profile_run()
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 3031, in profile_run
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     = self._dummy_run(self.max_num_tokens, is_profile=True)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return func(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 2809, in _dummy_run
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     outputs = self.model(
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]               ^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._call_impl(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return forward_call(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 671, in forward
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self.model(input_ids, positions, intermediate_tensors,
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 305, in __call__
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     output = self.compiled_callable(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 736, in compile_wrapper
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return fn(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 248, in forward
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     def forward(
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 375, in __call__
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return super().__call__(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._call_impl(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return forward_call(*args, **kwargs)
[2025-10-01 00:48:57] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 929, in _fn
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return fn(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 848, in call_wrapped
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._wrapped_call(self, *args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 424, in __call__
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     raise e
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 411, in __call__
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._call_impl(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return forward_call(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "<eval_with_key>.74", line 269, in forward
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     submod_2 = self.submod_2(getitem_3, s72, l_self_modules_layers_modules_0_modules_attn_modules_o_proj_parameters_weight_, getitem_4, l_self_modules_layers_modules_0_modules_post_attention_layernorm_parameters_weight_, l_self_modules_layers_modules_0_modules_mlp_modules_router_parameters_weight_, l_self_modules_layers_modules_0_modules_mlp_modules_router_parameters_bias_, l_self_modules_layers_modules_1_modules_input_layernorm_parameters_weight_, l_self_modules_layers_modules_1_modules_attn_modules_qkv_parameters_weight_, l_self_modules_layers_modules_1_modules_attn_modules_qkv_parameters_bias_, l_positions_, l_self_modules_layers_modules_0_modules_attn_modules_rotary_emb_buffers_cos_sin_cache_);  getitem_3 = l_self_modules_layers_modules_0_modules_attn_modules_o_proj_parameters_weight_ = getitem_4 = l_self_modules_layers_modules_0_modules_post_attention_layernorm_parameters_weight_ = l_self_modules_layers_modules_0_modules_mlp_modules_router_parameters_weight_ = l_self_modules_layers_modules_0_modules_mlp_modules_router_parameters_bias_ = l_self_modules_layers_modules_1_modules_input_layernorm_parameters_weight_ = l_self_modules_layers_modules_1_modules_attn_modules_qkv_parameters_weight_ = l_self_modules_layers_modules_1_modules_attn_modules_qkv_parameters_bias_ = None
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/cuda_graph.py", line 119, in __call__
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self.runnable(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/cuda_piecewise_backend.py", line 90, in __call__
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self.compiled_graph_for_general_shape(*args)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_inductor/standalone_compile.py", line 62, in __call__
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._compiled_fn(*args)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 929, in _fn
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return fn(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 1241, in forward
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return compiled_fn(full_args)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 384, in runtime_wrapper
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     all_outs = call_func_at_runtime_with_args(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     out = normalize_as_list(f(args))
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]                             ^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 750, in inner_fn
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     outs = compiled_fn(args)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 556, in wrapper
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return compiled_fn(runtime_args)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_inductor/output_code.py", line 584, in __call__
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self.current_callable(inputs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_inductor/utils.py", line 2716, in run
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     out = model(new_inputs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]           ^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/tmp/torchinductor_root/df/cdffubmdjhginixg5xjjehjrjodxrnppitw75aodtfoy7lea5pv5.py", line 946, in call
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     buf6 = torch.ops.vllm.moe_forward.default(buf4, buf5, 'model.layers.0.mlp.experts')
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 829, in __call__
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._op(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1898, in moe_forward
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self.forward_impl(hidden_states, router_logits)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1787, in forward_impl
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     final_hidden_states = self.quant_method.apply(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]                           ^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py", line 874, in apply
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return cutlass_moe_fp8(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/cutlass_moe.py", line 499, in cutlass_moe_fp8
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return fn(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._call_impl(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return forward_call(*args, **kwargs)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/modular_kernel.py", line 859, in forward
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     fused_out = self._maybe_chunk_fused_experts(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/modular_kernel.py", line 630, in _maybe_chunk_fused_experts
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     return self._do_fused_experts(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]            ^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/modular_kernel.py", line 575, in _do_fused_experts
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     self.fused_experts.apply(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/cutlass_moe.py", line 274, in apply
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     run_cutlass_moe_fp8(
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/cutlass_moe.py", line 183, in run_cutlass_moe_fp8
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     activation_callable(act_out, mm1_out)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/cutlass_moe.py", line 268, in <lambda>
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     activation_callable = lambda o, i: self.activation(activation, o, i)
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/modular_kernel.py", line 413, in activation
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654]     raise ValueError(f"Unsupported FusedMoe activation: {activation}")
[2025-10-01 00:48:58] [8e6790a6] INFO: [1;36m(Worker_TP1 pid=17731)[0;0m ERROR 10-01 00:48:57 [multiproc_executor.py:654] ValueError: Unsupported FusedMoe activation: swigluoai
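For anyone skimming the log, my (unverified) reading of the failure is below. It is a toy approximation of the dispatch in modular_kernel.py, not vLLM source, and the supported-activation set is an assumption on my part.

```python
# Toy approximation of my reading of the traceback above -- NOT vLLM code.
# The FP8 CUTLASS MoE path maps the model's activation name to a fused kernel
# and rejects anything it does not recognize; gpt-oss uses "swigluoai".
KNOWN_ACTIVATIONS = {"silu", "gelu"}  # assumed set; the real one depends on the vLLM release


def moe_activation_dispatch(activation: str) -> None:
    if activation not in KNOWN_ACTIVATIONS:
        raise ValueError(f"Unsupported FusedMoe activation: {activation}")
    # ...a fused activation kernel would run here...


moe_activation_dispatch("swigluoai")  # raises, matching the error in the log
```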
