# Molmo
LMDeploy supports the following Molmo series of models, which are detailed in the table below:
| Model           | Size | Supported Inference Engine |
| :-------------- | :--- | :------------------------- |
| Molmo-7B-D-0924 | 7B   | TurboMind                  |
| Molmo-72B-0924  | 72B  | TurboMind                  |
The next chapter demonstrates how to deploy a Molmo model using LMDeploy, with Molmo-7B-D-0924 as an example.
## Installation
Please install LMDeploy by following the installation guide.
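For a quick start, LMDeploy can typically be installed from PyPI; refer to the installation guide for CUDA and platform specifics:

```shell
pip install lmdeploy
```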
## Offline inference
The following sample code shows the basic usage of the VLM pipeline. For more details, please refer to VLM Offline Inference Pipeline.
```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('allenai/Molmo-7B-D-0924')

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
```
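If the model does not fit on a single GPU, tensor parallelism can be enabled through the TurboMind engine config. A minimal sketch, assuming two visible GPUs (the `tp` value here is illustrative):

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# tp=2 is an assumption for this sketch; set it to the number of GPUs
# available on your machine.
pipe = pipeline('allenai/Molmo-7B-D-0924',
                backend_config=TurbomindEngineConfig(tp=2))

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
```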
More examples are listed below:
**multi-image multi-round conversation, combined images**
```python
from lmdeploy import pipeline, GenerationConfig

pipe = pipeline('allenai/Molmo-7B-D-0924', log_level='INFO')
messages = [
    dict(role='user', content=[
        dict(type='text', text='Describe the two images in detail.'),
        dict(type='image_url', image_url=dict(url='https://raw.githubusercontent.com/QwenLM/Qwen-VL/master/assets/mm_tutorial/Beijing_Small.jpeg')),
        dict(type='image_url', image_url=dict(url='https://raw.githubusercontent.com/QwenLM/Qwen-VL/master/assets/mm_tutorial/Chongqing_Small.jpeg'))
    ])
]
out = pipe(messages, gen_config=GenerationConfig(do_sample=False))

messages.append(dict(role='assistant', content=out.text))
messages.append(dict(role='user', content='What are the similarities and differences between these two images.'))
out = pipe(messages, gen_config=GenerationConfig(do_sample=False))
print(out.text)
```
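Beyond `do_sample`, `GenerationConfig` exposes common sampling knobs. A brief sketch (the values are illustrative, not recommendations):

```python
from lmdeploy import GenerationConfig

# Illustrative values only: cap the response length and enable
# temperature/top-p sampling instead of greedy decoding.
gen_config = GenerationConfig(
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```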
## Serving
You can launch the server with the `lmdeploy serve api_server` CLI:

```shell
lmdeploy serve api_server allenai/Molmo-7B-D-0924
```
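The server listens on port 23333 by default; flags such as `--server-port` and `--tp` change the port or shard the model across GPUs. A sketch with illustrative values:

```shell
# --tp 2 is an assumption; match it to your GPU count.
lmdeploy serve api_server allenai/Molmo-7B-D-0924 --server-port 23333 --tp 2
```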
You can also launch the service with the docker image:
```shell
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 23333:23333 \
    --ipc=host \
    openmmlab/lmdeploy:latest \
    lmdeploy serve api_server allenai/Molmo-7B-D-0924
```
If you see the following logs, it means the service has launched successfully.
```text
HINT:    Please open http://0.0.0.0:23333 in a browser for detailed api usage!!!
HINT:    Please open http://0.0.0.0:23333 in a browser for detailed api usage!!!
HINT:    Please open http://0.0.0.0:23333 in a browser for detailed api usage!!!
INFO:     Started server process [2439]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:23333 (Press CTRL+C to quit)
```
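Once the service is up, it exposes OpenAI-compatible endpoints, so you can query it with the `openai` Python client. A minimal sketch, assuming the server runs locally on the default port:

```python
from openai import OpenAI

# The api_key is a placeholder; any non-empty string works unless the
# server was launched with API keys configured.
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Describe this image.'},
            {'type': 'image_url',
             'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }])
print(response.choices[0].message.content)
```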
The arguments of `lmdeploy serve api_server` can be reviewed in detail by running `lmdeploy serve api_server -h`.
More information about `api_server`, as well as how to access the service, can be found here.