Quickstart#
Prerequisites#
Supported Devices#
Atlas A2 training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)
Atlas 800I A2 inference series (Atlas 800I A2)
Set up the environment using a container#
# Update DEVICE according to your device (/dev/davinci[0-7])
export DEVICE=/dev/davinci0
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:v0.7.3.post1
docker run --rm \
--name vllm-ascend \
--device $DEVICE \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
The default working directory is /workspace. The vLLM and vLLM Ascend code lives in /vllm-workspace and is installed in development mode (pip install -e), so developers can apply code changes immediately without reinstalling.
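If you want to confirm that the editable install is active inside the container, a quick check such as the one below can be used (a minimal sketch; it only assumes that the vllm package is importable in the container's Python environment):
# Minimal sanity check for the editable (pip install -e) setup inside the container.
import vllm

print("vLLM version:", vllm.__version__)
# For an editable install, the module path should point into /vllm-workspace.
print("vLLM location:", vllm.__file__)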
(Optional) Install MindIE Turbo#
Install MindIE Turbo for performance acceleration:
pip install mindie_turbo==2.0rc1
Usage#
You can use the ModelScope mirror to speed up model downloads:
export VLLM_USE_MODELSCOPE=true
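If you launch vLLM from a Python script rather than from the shell, the same switch can be set programmatically (a minimal sketch; it sets the variable before importing vllm, which is assumed here to be required for it to take effect):
import os

# Use the ModelScope mirror for model downloads; set this before importing vllm.
os.environ["VLLM_USE_MODELSCOPE"] = "true"

from vllm import LLM  # import after the environment variable is set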
There are two ways to run vLLM on Ascend NPUs: offline batch inference and an OpenAI-compatible API server.
With vLLM installed, you can start generating text for a list of input prompts, i.e., offline batch inference.
Try running the following Python script directly, or generate text from the python3 interactive shell:
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
# The first run will take about 3-5 mins (10 MB/s) to download models
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
vLLM can also be deployed as a server that implements the OpenAI API protocol. Run the following command to start the vLLM server with the Qwen/Qwen2.5-0.5B-Instruct model:
# Deploy vLLM server (The first run will take about 3-5 mins (10 MB/s) to download models)
vllm serve Qwen/Qwen2.5-0.5B-Instruct &
If you see logs like the following:
INFO: Started server process [3594]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Congratulations, you have successfully started the vLLM server!
You can query the list of available models:
curl http://localhost:8000/v1/models | python3 -m json.tool
You can also query the model with an input prompt:
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Qwen/Qwen2.5-0.5B-Instruct",
"prompt": "Beijing is a",
"max_tokens": 5,
"temperature": 0
}' | python3 -m json.tool
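Besides curl, the server can be queried from Python with the openai client package, since vLLM exposes an OpenAI-compatible API (a minimal sketch; it assumes the openai package is installed and the server is listening on localhost:8000; the API key is a placeholder because the server does not require one by default):
from openai import OpenAI

# The vLLM server does not require authentication by default; the key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# List the models served by this vLLM instance.
for model in client.models.list():
    print(model.id)

# Request a completion, mirroring the curl example above.
completion = client.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    prompt="Beijing is a",
    max_tokens=5,
    temperature=0,
)
print(completion.choices[0].text)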
vLLM is running as a background process. You can stop it gracefully with kill -2 $VLLM_PID,
which is equivalent to stopping a foreground vLLM process with Ctrl-C:
ps -ef | grep "/.venv/bin/vllm serve" | grep -v grep
VLLM_PID=`ps -ef | grep "/.venv/bin/vllm serve" | grep -v grep | awk '{print $2}'`
kill -2 $VLLM_PID
You will see output like the following:
INFO: Shutting down FastAPI HTTP server.
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
Finally, you can exit the container with Ctrl-D.