Multi-Node (DeepSeek)
Multi-Node Online Serving
Run the docker container on each machine:
docker run --rm \
--name vllm-ascend \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it quay.io/ascend/vllm-ascend:v0.7.3.post1 bash
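Before continuing, you can optionally confirm that the container sees all eight NPUs, using the npu-smi binary mounted above:
# Should list all eight davinci devices mapped into the container
npu-smi info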
(Optional) Install MindIE Turbo for performance acceleration:
pip install mindie_turbo==2.0rc1
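An optional sanity check that the wheel installed correctly:
# Prints the installed MindIE Turbo version and location
pip show mindie_turbo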
Choose one machine as the head node and the other machines as worker nodes, then start ray on each machine:
Note
Check your nic_name with the command ip addr.
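For example, this one-liner prints each interface name next to its IPv4 address, so you can pick the interface that carries your {local_ip}:
# Print "interface address" pairs; choose the one matching {local_ip}
ip -o -4 addr show | awk '{print $2, $4}'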
# Head node
export HCCL_IF_IP={local_ip}
export GLOO_SOCKET_IFNAME={nic_name}
export TP_SOCKET_IFNAME={nic_name}
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export RAY_EXPERIMENTAL_NOSET_ASCEND_RT_VISIBLE_DEVICES=1
ray start --head --num-gpus=8
# Worker node
export HCCL_IF_IP={local_ip}
export ASCEND_PROCESS_LOG_PATH={plog_save_path}
export GLOO_SOCKET_IFNAME={nic_name}
export TP_SOCKET_IFNAME={nic_name}
export RAY_EXPERIMENTAL_NOSET_ASCEND_RT_VISIBLE_DEVICES=1
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
ray start --address='{head_node_ip}:{port_num}' --num-gpus=8 --node-ip-address={local_ip}
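Before launching the server, it is worth confirming on the head node that every worker has joined. Ray's built-in status command shows the cluster's resources:
# The summary should list all nodes and 16 NPUs in total
ray status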
Note
If you are running DeepSeek V3/R1, remove the quantization_config section from the config.json file, as vllm-ascend does not support it currently.
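A hedged way to drop that section, assuming jq is installed and {model_path} stands in for your local model directory:
# Remove the quantization_config key from the model's config.json
jq 'del(.quantization_config)' {model_path}/config.json > /tmp/config.json \
  && mv /tmp/config.json {model_path}/config.json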
Start the vLLM server on the head node:
export VLLM_HOST_IP={head_node_ip}
export HCCL_CONNECT_TIMEOUT=120
export ASCEND_PROCESS_LOG_PATH={plog_save_path}
export HCCL_IF_IP={head_node_ip}
if [ -d "{plog_save_path}" ]; then
    rm -rf {plog_save_path}
    echo ">>> remove {plog_save_path}"
fi
LOG_FILE="multinode_$(date +%Y%m%d_%H%M).log"
export VLLM_TORCH_PROFILER_DIR=./vllm_profile
python -m vllm.entrypoints.openai.api_server \
--model="Deepseek/DeepSeek-V2-Lite-Chat" \
--trust-remote-code \
--enforce-eager \
--max-model-len {max_model_len} \
--distributed_executor_backend "ray" \
--tensor-parallel-size 16 \
--disable-log-requests \
--disable-log-stats \
--disable-frontend-multiprocessing \
--port {port_num} \
> $LOG_FILE 2>&1
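Once the server is up, a quick liveness check against the standard OpenAI-compatible models endpoint (run from the head node):
# Should return a JSON object listing the served model
curl http://127.0.0.1:{port_num}/v1/models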
Once the server is started, you can query the model with input prompts:
curl -X POST http://127.0.0.1:{port_num}/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Deepseek/DeepSeek-V2-Lite-Chat",
"prompt": "The future of AI is",
"max_tokens": 24
}'
If you query the server successfully, you will see the output below (client):
{"id":"cmpl-6dfb5a8d8be54d748f0783285dd52303","object":"text_completion","created":1739957835,"model":"/home/data/DeepSeek-V2-Lite-Chat/","choices":[{"index":0,"text":" heavily influenced by neuroscience and cognitiveGuionistes. The goalochondria is to combine the efforts of researchers, technologists,","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":6,"total_tokens":30,"completion_tokens":24,"prompt_tokens_details":null}}
Logs of the vllm server:
INFO: 127.0.0.1:59384 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 02-19 17:37:35 metrics.py:453] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 1.9 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
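Since DeepSeek-V2-Lite-Chat is a chat model, you can also exercise the OpenAI-compatible chat endpoint. A minimal sketch against the same server:
curl -X POST http://127.0.0.1:{port_num}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Deepseek/DeepSeek-V2-Lite-Chat",
"messages": [{"role": "user", "content": "The future of AI is"}],
"max_tokens": 24
}'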