Aug 22, 2023

Data preparation and analysis for chat model fine-tuning

This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. The method shown here corresponds to the current fine-tuning method for gpt-3.5-turbo. For models such as babbage-002 and davinci-002, see the legacy fine-tuning guide.
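
Each line of the training file is a JSON object containing a messages list in the chat format. For illustration, a single line of such a .jsonl file might look like this (it mirrors the first example of the toy dataset loaded below):

{"messages": [{"role": "system", "content": "You are a happy assistant that puts a positive spin on everything."}, {"role": "user", "content": "I fell off my bike today."}, {"role": "assistant", "content": "It's great that you're getting exercise outdoors!"}]}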

import json
import tiktoken # for token counting
import numpy as np
from collections import defaultdict

data_path = "data/toy_chat_fine_tuning.jsonl"

# Load the dataset
with open(data_path, 'r', encoding='utf-8') as f:
    dataset = [json.loads(line) for line in f]

# Initial dataset stats
print("Num examples:", len(dataset))
print("First example:")
for message in dataset[0]["messages"]:
    print(message)
Num examples: 5
First example:
{'role': 'system', 'content': 'You are a happy assistant that puts a positive spin on everything.'}
{'role': 'user', 'content': 'I fell off my bike today.'}
{'role': 'assistant', 'content': "It's great that you're getting exercise outdoors!"}

Format validation

We can perform a variety of error checks to validate that each conversation in the dataset adheres to the format expected by the fine-tuning API. Errors are categorized by their nature for easier debugging.

  1. Data Type Check: Checks whether each entry in the dataset is a dictionary (dict). Error type: data_type
  2. Presence of Message List: Checks if a messages list is present in each entry. Error type: missing_messages_list
  3. Message Keys Check: Validates that each message in the messages list contains the keys role and content. Error type: message_missing_key
  4. Unrecognized Keys in Messages: Logs if a message has keys other than role, content, weight, function_call, and name. Error type: message_unrecognized_key
  5. Role Validation: Ensures the role is one of "system", "user", or "assistant". Error type: unrecognized_role
  6. Content Validation: Verifies that content has textual data and is a string. Error type: missing_content
  7. Assistant Message Presence: Checks that each conversation has at least one message from the assistant. Error type: example_missing_assistant_message
The code below performs these checks and outputs counts for each type of error found. This is useful for debugging and for confirming that the dataset is ready for the next steps.

# Format error checks
format_errors = defaultdict(int)

for ex in dataset:
    if not isinstance(ex, dict):
        format_errors["data_type"] += 1
        continue
        
    messages = ex.get("messages", None)
    if not messages:
        format_errors["missing_messages_list"] += 1
        continue
        
    for message in messages:
        if "role" not in message or "content" not in message:
            format_errors["message_missing_key"] += 1
        
        if any(k not in ("role", "content", "name", "function_call", "weight") for k in message):
            format_errors["message_unrecognized_key"] += 1
        
        if message.get("role", None) not in ("system", "user", "assistant", "function"):
            format_errors["unrecognized_role"] += 1
            
        content = message.get("content", None)
        function_call = message.get("function_call", None)
        
        if (not content and not function_call) or not isinstance(content, str):
            format_errors["missing_content"] += 1
    
    if not any(message.get("role", None) == "assistant" for message in messages):
        format_errors["example_missing_assistant_message"] += 1

if format_errors:
    print("Found errors:")
    for k, v in format_errors.items():
        print(f"{k}: {v}")
else:
    print("No errors found")
No errors found

Token Counting Utilities

Let's define a few useful utilities to be used in the rest of the notebook.

encoding = tiktoken.get_encoding("cl100k_base")

# not exact!
# simplified from https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message  # per-message overhead of the chat format
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))  # assumes values are strings
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens

def num_assistant_tokens_from_messages(messages):
    num_tokens = 0
    for message in messages:
        if message["role"] == "assistant":
            num_tokens += len(encoding.encode(message["content"]))
    return num_tokens

def print_distribution(values, name):
    print(f"\n#### Distribution of {name}:")
    print(f"min / max: {min(values)}, {max(values)}")
    print(f"mean / median: {np.mean(values)}, {np.median(values)}")
    print(f"p5 / p95: {np.quantile(values, 0.1)}, {np.quantile(values, 0.9)}")

Data Warnings and Token Counts

With some lightweight analysis, we can identify potential issues in the dataset, such as missing messages, and gain statistical insight into message and token counts.

  1. Missing System/User Messages: Counts the number of conversations missing a "system" or "user" message. Such messages are critical for defining the assistant's behavior and initiating the conversation.
  2. Number of Messages Per Example: Summarizes the distribution of the number of messages in each conversation, providing insight into dialogue complexity.
  3. Total Tokens Per Example: Calculates and summarizes the distribution of the total number of tokens in each conversation. Important for understanding fine-tuning costs.
  4. Tokens in Assistant's Messages: Calculates the number of tokens in the assistant's messages per conversation and summarizes this distribution. Useful for understanding the assistant's verbosity.
  5. Token Limit Warnings: Checks whether any examples exceed the maximum token limit (16,385 tokens); such examples will be truncated during fine-tuning, potentially resulting in data loss.

# Warnings and tokens counts
n_missing_system = 0
n_missing_user = 0
n_messages = []
convo_lens = []
assistant_message_lens = []

for ex in dataset:
    messages = ex["messages"]
    if not any(message["role"] == "system" for message in messages):
        n_missing_system += 1
    if not any(message["role"] == "user" for message in messages):
        n_missing_user += 1
    n_messages.append(len(messages))
    convo_lens.append(num_tokens_from_messages(messages))
    assistant_message_lens.append(num_assistant_tokens_from_messages(messages))
    
print("Num examples missing system message:", n_missing_system)
print("Num examples missing user message:", n_missing_user)
print_distribution(n_messages, "num_messages_per_example")
print_distribution(convo_lens, "num_total_tokens_per_example")
print_distribution(assistant_message_lens, "num_assistant_tokens_per_example")
n_too_long = sum(l > 16385 for l in convo_lens)
print(f"\n{n_too_long} examples may be over the 16,385 token limit, they will be truncated during fine-tuning")
Num examples missing system message: 1
Num examples missing user message: 1

#### Distribution of num_messages_per_example:
min / max: 2, 9
mean / median: 3.8, 3.0
p10 / p90: 2.0, 6.6000000000000005

#### Distribution of num_total_tokens_per_example:
min / max: 26, 8032
mean / median: 1648.4, 45.0
p10 / p90: 26.8, 4863.6

#### Distribution of num_assistant_tokens_per_example:
min / max: 4, 8000
mean / median: 1610.2, 10.0
p10 / p90: 6.0, 4811.200000000001

0 examples may be over the 16,385 token limit; they will be truncated during fine-tuning
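
If any examples had exceeded the limit, one option (a minimal sketch, not part of the original workflow) is to drop them before uploading, since truncation silently discards the overflowing tokens:

# Sketch: keep only examples within the token limit, pairing dataset with convo_lens
dataset_within_limit = [
    ex for ex, length in zip(dataset, convo_lens) if length <= 16385
]
print(f"Kept {len(dataset_within_limit)} of {len(dataset)} examples")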

Cost Estimation

In this final section, we estimate the total number of tokens that will be used for fine-tuning, which allows us to approximate the cost. Note that the duration of fine-tuning jobs also increases with the token count.

# Pricing and default n_epochs estimate
MAX_TOKENS_PER_EXAMPLE = 16385

TARGET_EPOCHS = 3
MIN_TARGET_EXAMPLES = 100
MAX_TARGET_EXAMPLES = 25000
MIN_DEFAULT_EPOCHS = 1
MAX_DEFAULT_EPOCHS = 25

n_epochs = TARGET_EPOCHS
n_train_examples = len(dataset)
if n_train_examples * TARGET_EPOCHS < MIN_TARGET_EXAMPLES:
    n_epochs = min(MAX_DEFAULT_EPOCHS, MIN_TARGET_EXAMPLES // n_train_examples)
elif n_train_examples * TARGET_EPOCHS > MAX_TARGET_EXAMPLES:
    n_epochs = max(MIN_DEFAULT_EPOCHS, MAX_TARGET_EXAMPLES // n_train_examples)

n_billing_tokens_in_dataset = sum(min(MAX_TOKENS_PER_EXAMPLE, length) for length in convo_lens)
print(f"Dataset has ~{n_billing_tokens_in_dataset} tokens that will be charged for during training")
print(f"By default, you'll train for {n_epochs} epochs on this dataset")
print(f"By default, you'll be charged for ~{n_epochs * n_billing_tokens_in_dataset} tokens")
Dataset has ~8242 tokens that will be charged for during training
By default, you'll train for 20 epochs on this dataset
By default, you'll be charged for ~164840 tokens
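
To turn the billable token count into a dollar figure, multiply by the training price per token for your target model. The rate below is a placeholder, not an official price; check OpenAI's pricing page for the current figure:

# Hypothetical training price, for illustration only -- substitute the real rate
PRICE_PER_1K_TOKENS = 0.008  # placeholder value

estimated_cost = n_epochs * n_billing_tokens_in_dataset / 1000 * PRICE_PER_1K_TOKENS
print(f"Estimated training cost: ~${estimated_cost:.2f}")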