pb.adapters.create
pb.adapters.create creates a new adapter by kicking off a new (blocking) fine-tuning job
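All of the examples on this page assume a Predibase client named pb. A minimal setup sketch (the import path and Predibase(api_token=...) constructor follow the SDK's quickstart; the token value is a placeholder you must supply):
from predibase import Predibase, FinetuningConfig

# Client used as `pb` throughout the examples below; supply your own API token.
pb = Predibase(api_token="<PREDIBASE_API_TOKEN>")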
Parameters:
config: FinetuningConfig, default None
The configuration for the fine-tuning job
dataset: str, default None
The dataset to use for fine-tuning
continue_from_version: str, default None
The adapter version to continue training from
repo: str, default None
The name of the adapter repository in which to store the newly created adapter
description: str, default None
A description of the adapter
show_tensorboard: bool, default False
If true, launches a TensorBoard instance to view training logs (see the sketch after Example 4 below)
Returns:
Adapter
Examples:
Example 1: Create an adapter with default settings
# Create an adapter repository
repo = pb.repos.create(name="news-summarizer-model", description="TLDR News Summarizer Experiments", exists_ok=True)
# Start a fine-tuning job, blocks until training is finished
adapter = pb.adapters.create(
config=FinetuningConfig(
base_model="mistral-7b"
),
dataset="tldr_news",
repo=repo,
description="initial model with defaults"
)
Example 2: Create a new adapter with custom parameters
# Create an adapter repository
repo = pb.repos.create(name="news-summarizer-model", description="TLDR News Summarizer Experiments", exists_ok=True)
# Start a fine-tuning job with custom parameters, blocks until training is finished
adapter = pb.adapters.create(
config=FinetuningConfig(
base_model="mistral-7b",
task="instruction_tuning",
epochs=1, # default: 3
rank=8, # default: 16
learning_rate=0.0001, # default: 0.0002
target_modules=["q_proj", "v_proj", "k_proj"], # default: None (infers [q_proj, v_proj] for mistral-7b)
),
dataset="tldr_news",
repo=repo,
description="changing epochs, rank, and learning rate"
)
Example 3: Create a new adapter by continuing training from an existing adapter version
adapter = pb.adapters.create(
# Note: only `epochs` and `enable_early_stopping` are available parameters in this case.
config=FinetuningConfig(
epochs=3, # The maximum number of ADDITIONAL epochs to train for
enable_early_stopping=False,
),
continue_from_version="myrepo/3", # The adapter version to resume training from
dataset="mydataset",
repo="myrepo"
)
Example 4: Create a new adapter by continuing training from a specific checkpoint of an existing adapter version
adapter = pb.adapters.create(
# Note: only `epochs` and `enable_early_stopping` are available parameters in this case.
config=FinetuningConfig(
epochs=3, # The maximum number of ADDITIONAL epochs to train for
enable_early_stopping=False,
),
continue_from_version="myrepo/3@11", # Resumes from checkpoint 11 of `myrepo/3`.
dataset="mydataset",
repo="myrepo"
)
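None of the examples above exercise show_tensorboard. A minimal sketch of the flag, reusing the repo and dataset from Example 1 (only parameters documented above are used):
# Start a fine-tuning job and launch TensorBoard to follow its training logs
adapter = pb.adapters.create(
config=FinetuningConfig(
base_model="mistral-7b"
),
dataset="tldr_news",
repo=repo,
description="default run with TensorBoard enabled",
show_tensorboard=True # launches a TensorBoard instance for the training logs
)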
Asynchronous fine-tuning jobs
pb.finetuning.jobs.create kicks off a non-blocking fine-tuning job
Parameters:
config: FinetuningConfig, default None
The configuration for the fine-tuning job
dataset: str, default None
The dataset to use for fine-tuning
repo: str, default None
The name of the adapter repository in which to store the newly created adapter
description: str, default None
A description of the adapter
watch: bool, default False
Whether to block until the fine-tuning job completes (a blocking variant is sketched after the example below)
Returns:
FinetuningJob
Example: Kick off a fine-tuning job without blocking
job: FinetuningJob = pb.finetuning.jobs.create(
config=FinetuningConfig(
base_model="mistral-7b",
task="instruction_tuning",
epochs=1, # default: 3
rank=8, # default: 16
learning_rate=0.0001, # default: 0.0002
target_modules=["q_proj", "v_proj", "k_proj"], # default: None (infers [q_proj, v_proj] for mistral-7b)
),
dataset=dataset,
repo=repo,
description="changing epochs, rank, and learning rate",
watch=False,
)
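Passing watch=True makes the same call block and stream progress until training finishes, mirroring pb.adapters.create. A minimal sketch of the blocking variant (dataset and repo as above; only parameters documented on this page are used):
# Blocks until the fine-tuning job completes because watch=True
job: FinetuningJob = pb.finetuning.jobs.create(
config=FinetuningConfig(
base_model="mistral-7b"
),
dataset=dataset,
repo=repo,
description="blocking variant via watch=True",
watch=True,
)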