
Structured Outputs

vLLM supports generating structured outputs using xgrammar or guidance as the backend. This document shows you some examples of the different options available for generating structured outputs.

Online Serving (OpenAI API)

You can generate structured outputs using OpenAI's Completions and Chat API.

The following parameters are supported, which must be added as extra parameters:

  • guided_choice: the output will be exactly one of the given choices.
  • guided_regex: the output will follow the regex pattern.
  • guided_json: the output will follow the JSON schema.
  • guided_grammar: the output will follow the context-free grammar.
  • structural_tag: follow a JSON schema within a set of specified tags in the generated text.

You can see the complete list of supported parameters on the OpenAI-Compatible Server page.

Structured outputs are supported by default in the OpenAI-Compatible Server. You may choose to specify the backend to use by setting the --guided-decoding-backend flag for vllm serve. The default backend is auto, which will try to choose an appropriate backend based on the details of the request. You may also choose a specific backend, along with some options. The full set of options is available in the vllm serve --help text.
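
For example, to pin the server to the xgrammar backend mentioned above (the model name here is just a placeholder):

vllm serve meta-llama/Llama-3.1-8B-Instruct --guided-decoding-backend xgrammar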

Now let's look at an example for each of these cases, starting with guided_choice, as it's the easiest one:

Code
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="-",
)
model = client.models.list().data[0].id

completion = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={"guided_choice": ["positive", "negative"]},
)
print(completion.choices[0].message.content)

The next example shows how to use guided_regex. The idea is to generate an email address from a simple regex template:

Code
completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Generate an example email address for Alan Turing, who works in Enigma. End in .com and new line. Example result: [email protected]\n",
        }
    ],
    extra_body={"guided_regex": r"\w+@\w+\.com\n", "stop": ["\n"]},
)
print(completion.choices[0].message.content)

One of the most relevant features in structured text generation is the option to generate valid JSON with pre-defined fields and formats. For this we can use the guided_json parameter in two different ways:

  • using a JSON Schema directly (a minimal sketch of this appears after the tip below), or
  • defining a Pydantic model and then extracting the JSON Schema from it (which is normally the easier option).

The next example shows how to use the guided_json parameter with a Pydantic model:

Code
from pydantic import BaseModel
from enum import Enum

class CarType(str, Enum):
    sedan = "sedan"
    suv = "SUV"
    truck = "Truck"
    coupe = "Coupe"

class CarDescription(BaseModel):
    brand: str
    model: str
    car_type: CarType

json_schema = CarDescription.model_json_schema()

completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Generate a JSON with the brand, model and car_type of the most iconic car from the 90's",
        }
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "car-description",
            "schema": CarDescription.model_json_schema()
        },
    },
)
print(completion.choices[0].message.content)

Tip

While not strictly necessary, it's normally better to indicate in the prompt the JSON schema and how the fields should be populated. This can improve the results notably in most cases.
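
For the first route, here is a minimal sketch of passing a plain JSON Schema dictionary through guided_json via extra_body; the schema and prompt below are illustrative examples, not taken from the original document:

Code
# An illustrative hand-written JSON Schema (instead of one derived
# from a Pydantic model), passed directly as the guided_json value.
json_schema = {
    "type": "object",
    "properties": {
        "brand": {"type": "string"},
        "model": {"type": "string"},
    },
    "required": ["brand", "model"],
}

completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Generate a JSON with the brand and model of an iconic car",
        }
    ],
    extra_body={"guided_json": json_schema},
)
print(completion.choices[0].message.content)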

Finally, we have the guided_grammar option, which is probably the most difficult to use but is really powerful. It allows us to define complete languages, such as SQL queries. It works by using a context-free EBNF grammar. As an example, we can use it to define a specific format for simplified SQL queries:

Code
simplified_sql_grammar = """
    root ::= select_statement

    select_statement ::= "SELECT " column " from " table " where " condition

    column ::= "col_1 " | "col_2 "

    table ::= "table_1 " | "table_2 "

    condition ::= column "= " number

    number ::= "1 " | "2 "
"""

completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Generate an SQL query to show the 'username' and 'email' from the 'users' table.",
        }
    ],
    extra_body={"guided_grammar": simplified_sql_grammar},
)
print(completion.choices[0].message.content)

See also: full example

Reasoning Outputs

You can also use structured outputs with reasoning models. For example, start a server with a reasoning parser:

vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-7B --reasoning-parser deepseek_r1

Note that you can use reasoning with any of the structured outputs features provided. The following example shows it combined with a JSON schema:

Code
from pydantic import BaseModel


class People(BaseModel):
    name: str
    age: int


completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Generate a JSON with the name and age of one random person.",
        }
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "people",
            "schema": People.model_json_schema()
        }
    },
)
print("reasoning_content: ", completion.choices[0].message.reasoning_content)
print("content: ", completion.choices[0].message.content)

See also: full example

Experimental Automatic Parsing (OpenAI API)

This section covers OpenAI's beta wrapper over the client.chat.completions.create() method, which provides richer integration with Python-specific types.

At the time of writing (openai == 1.54.4), this is a "beta" feature in the OpenAI client library. The code reference can be found here.

For the following examples, vLLM was set up using vllm serve meta-llama/Llama-3.1-8B-Instruct

Here is a simple example demonstrating how to get structured output using a Pydantic model:

Code
from pydantic import BaseModel
from openai import OpenAI

class Info(BaseModel):
    name: str
    age: int

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="dummy")
model = client.models.list().data[0].id
completion = client.beta.chat.completions.parse(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "My name is Cameron, I'm 28. What's my name and age?"},
    ],
    response_format=Info,
)

message = completion.choices[0].message
print(message)
assert message.parsed
print("Name:", message.parsed.name)
print("Age:", message.parsed.age)

Output:

ParsedChatCompletionMessage[Info](content='{"name": "Cameron", "age": 28}', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[], parsed=Info(name='Cameron', age=28))
Name: Cameron
Age: 28

Here is a more complex example using nested Pydantic models to handle a step-by-step math solution:

Code
from pydantic import BaseModel
from openai import OpenAI

class Step(BaseModel):
    explanation: str
    output: str

class MathResponse(BaseModel):
    steps: list[Step]
    final_answer: str

completion = client.beta.chat.completions.parse(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful expert math tutor."},
        {"role": "user", "content": "Solve 8x + 31 = 2."},
    ],
    response_format=MathResponse,
)

message = completion.choices[0].message
print(message)
assert message.parsed
for i, step in enumerate(message.parsed.steps):
    print(f"Step #{i}:", step)
print("Answer:", message.parsed.final_answer)

Output:

ParsedChatCompletionMessage[MathResponse](content='{ "steps": [{ "explanation": "First, let\'s isolate the term with the variable \'x\'. To do this, we\'ll subtract 31 from both sides of the equation.", "output": "8x + 31 - 31 = 2 - 31"}, { "explanation": "By subtracting 31 from both sides, we simplify the equation to 8x = -29.", "output": "8x = -29"}, { "explanation": "Next, let\'s isolate \'x\' by dividing both sides of the equation by 8.", "output": "8x / 8 = -29 / 8"}], "final_answer": "x = -29/8" }', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[], parsed=MathResponse(steps=[Step(explanation="First, let's isolate the term with the variable 'x'. To do this, we'll subtract 31 from both sides of the equation.", output='8x + 31 - 31 = 2 - 31'), Step(explanation='By subtracting 31 from both sides, we simplify the equation to 8x = -29.', output='8x = -29'), Step(explanation="Next, let's isolate 'x' by dividing both sides of the equation by 8.", output='8x / 8 = -29 / 8')], final_answer='x = -29/8'))
Step #0: explanation="First, let's isolate the term with the variable 'x'. To do this, we'll subtract 31 from both sides of the equation." output='8x + 31 - 31 = 2 - 31'
Step #1: explanation='By subtracting 31 from both sides, we simplify the equation to 8x = -29.' output='8x = -29'
Step #2: explanation="Next, let's isolate 'x' by dividing both sides of the equation by 8." output='8x / 8 = -29 / 8'
Answer: x = -29/8

An example of using structural_tag can be found here: examples/online_serving/structured_outputs
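
For orientation, the following is a rough sketch of what a structural_tag request can look like. The payload shape (a JSON string with "type", "structures", and "triggers" keys, following xgrammar's structural-tag format) and the get_weather tag are assumptions here; consult the linked example for the authoritative format in your vLLM version:

Code
import json

# Assumed xgrammar-style structural-tag payload: the JSON schema is enforced
# only between the begin/end tags, and constrained generation kicks in when
# the model emits one of the trigger prefixes.
tag_config = json.dumps(
    {
        "type": "structural_tag",
        "structures": [
            {
                "begin": "<function=get_weather>",
                "schema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
                "end": "</function>",
            }
        ],
        "triggers": ["<function="],
    }
)

completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "You can call get_weather by writing "
            "<function=get_weather>{...}</function>. What is the weather in Seattle?",
        }
    ],
    extra_body={"structural_tag": tag_config},
)
print(completion.choices[0].message.content)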

Offline Inference

Offline inference supports the same types of structured outputs. To use it, we need to configure guided decoding with the GuidedDecodingParams class inside SamplingParams. The main options available in GuidedDecodingParams are:

  • json
  • regex
  • choice
  • grammar
  • structural_tag

These parameters can be used in the same way as the parameters from the online serving examples above. An example of the usage of the choice parameter is shown below:

Code
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

llm = LLM(model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

guided_decoding_params = GuidedDecodingParams(choice=["Positive", "Negative"])
sampling_params = SamplingParams(guided_decoding=guided_decoding_params)
outputs = llm.generate(
    prompts="Classify this sentiment: vLLM is wonderful!",
    sampling_params=sampling_params,
)
print(outputs[0].outputs[0].text)
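
The other options work analogously. As one more illustrative sketch (not from the original document), the json option accepts a JSON Schema, for example one generated from a Pydantic model:

Code
from pydantic import BaseModel
from vllm import SamplingParams
from vllm.sampling_params import GuidedDecodingParams

class Person(BaseModel):
    name: str
    age: int

# Reuse the `llm` instance created in the previous example.
guided_decoding_params = GuidedDecodingParams(json=Person.model_json_schema())
sampling_params = SamplingParams(
    guided_decoding=guided_decoding_params,
    max_tokens=100,  # leave room for the complete JSON object
)
outputs = llm.generate(
    prompts="Generate a JSON with the name and age of one random person.",
    sampling_params=sampling_params,
)
print(outputs[0].outputs[0].text)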

See also: full example
