Data formats
This section documents the input and output formats of data used by spaCy, including the training config, training data and lexical vocabulary data. For an overview of label schemes used by the models, see the models directory. Each trained pipeline documents the label schemes used in its components, depending on the data it was trained on.
Training config v3.0
Config files define the training process and pipeline and can be passed to spacy train. They use Thinc's configuration system under the hood. For details on how to use training configs, see the usage documentation. To get started with the recommended settings for your use case, check out the quickstart widget or run the init config command.
explosion/spaCy/master/spacy/default_config.cfg
nlp section
Defines the nlp object, its tokenizer and the names of its processing pipeline components.
| Name | Description |
|---|---|
| lang | Pipeline language ISO code. Defaults to null. str |
| pipeline | Names of pipeline components in order. Should correspond to sections in the [components] block, e.g. [components.ner]. See docs on defining components. Defaults to []. List[str] |
| disabled | Names of pipeline components that are loaded but disabled by default and not run as part of the pipeline. Should correspond to components listed in pipeline. After a pipeline is loaded, disabled components can be enabled using Language.enable_pipe. List[str] |
| before_creation | Optional callback to modify Language subclass before it's initialized. Defaults to null. Optional[Callable[[Type[Language]], Type[Language]]] |
| after_creation | Optional callback to modify nlp object right after it's initialized. Defaults to null. Optional[Callable[[Language], Language]] |
| after_pipeline_creation | Optional callback to modify nlp object after the pipeline components have been added. Defaults to null. Optional[Callable[[Language], Language]] |
| tokenizer | The tokenizer to use. Defaults to Tokenizer. Callable[[str], Doc] |
| batch_size | Default batch size for Language.pipe and Language.evaluate. int |
components section
This section includes definitions of the pipeline components and their models, if available. Components in this section can be referenced in the pipeline of the [nlp] block. Component blocks need to specify either a factory (named function to use to create the component) or a source (name of a trained pipeline or path to copy components from). See the docs on defining pipeline components for details.
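As an illustration (the component names and source pipeline below are placeholders), a [components] block can mix both styles:

```ini
[components]

# Create a new component from a registered factory
[components.ner]
factory = "ner"

# Copy a component with its weights from an existing trained pipeline
[components.tagger]
source = "en_core_web_sm"
```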
paths, system variables
These sections define variables that can be referenced across the other sections as variables. For example, ${paths.train} uses the value of train defined in the block [paths]. If your config includes custom registered functions that need paths, you can define them here. All config values can also be overwritten on the CLI when you run spacy train, which is especially relevant for data paths that you don't want to hard-code in your config file.
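For example, assuming hypothetical file paths, a [paths] block can be interpolated elsewhere in the config:

```ini
[paths]
train = "corpus/train.spacy"
dev = "corpus/dev.spacy"

[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
```

Running spacy train config.cfg --paths.train ./other.spacy would then override the value without editing the file.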
corpora section
This section defines a dictionary mapping of string keys to functions. Each function takes an nlp object and yields Example objects. By default, the two keys train and dev are specified and each refer to a Corpus. When pretraining, an additional pretrain section is added that defaults to a JsonlCorpus. You can also register custom functions that return a callable.
| Name | Description |
|---|---|
| train | Training data corpus, typically used in [training] block. Callable[[Language], Iterator[Example]] |
| dev | Development data corpus, typically used in [training] block. Callable[[Language], Iterator[Example]] |
| pretrain | Raw text for pretraining, typically used in [pretraining] block (if available). Callable[[Language], Iterator[Example]] |
| … | Any custom or alternate corpora. Callable[[Language], Iterator[Example]] |
Alternatively, the [corpora] block can refer to one function that returns a dictionary keyed by the corpus names. This can be useful if you want to load a single corpus once and then divide it up into train and dev partitions.
| Name | Description |
|---|---|
| corpora | A dictionary keyed by string names, mapped to corpus functions that receive the current nlp object and return an iterator of Example objects. Dict[str, Callable[[Language], Iterator[Example]]] |
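A minimal sketch of such a function, assuming a hypothetical reader name and an in-memory stand-in for a real data source, could partition one dataset into both corpora:

```python
import spacy
from spacy.training import Example

@spacy.registry.readers("demo_split_reader.v1")  # hypothetical reader name
def create_corpora(split: float = 0.8):
    """Return {"train": ..., "dev": ...} corpus callables from one dataset."""
    # In-memory stand-in for a real data source
    data = [
        ("I like cats", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}),
        ("I dislike rain", {"cats": {"POSITIVE": 0.0, "NEGATIVE": 1.0}}),
        ("Coffee is great", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}),
        ("Mondays are awful", {"cats": {"POSITIVE": 0.0, "NEGATIVE": 1.0}}),
        ("What a lovely day", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}),
    ]
    cutoff = int(len(data) * split)

    def make_reader(rows):
        def reader(nlp):
            # Each corpus function receives the nlp object and yields Examples
            for text, annots in rows:
                yield Example.from_dict(nlp.make_doc(text), annots)
        return reader

    return {"train": make_reader(data[:cutoff]), "dev": make_reader(data[cutoff:])}
```

The [corpora] block would then point at the registered function, e.g. @readers = "demo_split_reader.v1".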
training section
This section defines settings and controls for the training and evaluation process that are used when you run spacy train.
| Name | Description |
|---|---|
| accumulate_gradient | Whether to divide the batch up into substeps. Defaults to 1. int |
| batcher | Callable that takes an iterator of Doc objects and yields batches of Docs. Defaults to batch_by_words. Callable[[Iterator[Doc]], Iterator[List[Doc]]] |
| before_to_disk | Optional callback to modify nlp object right before it is saved to disk during and after training. Can be used to remove or reset config values or disable components. Defaults to null. Optional[Callable[[Language], Language]] |
| before_update v3.5 | Optional callback that is invoked at the start of each training step with the nlp object and a Dict containing the following entries: step, epoch. Can be used to make deferred changes to components. Defaults to null. Optional[Callable[[Language, Dict[str, Any]], None]] |
| dev_corpus | Dot notation of the config location defining the dev corpus. Defaults to corpora.dev. str |
| dropout | The dropout rate. Defaults to 0.1. float |
| eval_frequency | How often to evaluate during training (steps). Defaults to 200. int |
| frozen_components | Pipeline component names that are "frozen" and shouldn't be initialized or updated during training. See here for details. Defaults to []. List[str] |
| annotating_components v3.1 | Pipeline component names that should set annotations on the predicted docs during training. See here for details. Defaults to []. List[str] |
| gpu_allocator | Library for cupy to route GPU memory allocation to. Can be "pytorch" or "tensorflow". Defaults to variable ${system.gpu_allocator}. str |
| logger | Callable that takes the nlp and stdout and stderr IO objects, sets up the logger, and returns two new callables to log a training step and to finalize the logger. Defaults to ConsoleLogger. Callable[[Language, IO, IO], [Tuple[Callable[[Dict[str, Any]], None], Callable[[], None]]]] |
| max_epochs | Maximum number of epochs to train for. 0 means an unlimited number of epochs. -1 means that the train corpus should be streamed rather than loaded into memory with no shuffling within the training loop. Defaults to 0. int |
| max_steps | Maximum number of update steps to train for. 0 means an unlimited number of steps. Defaults to 20000. int |
| optimizer | The optimizer. The learning rate schedule and other settings can be configured as part of the optimizer. Defaults to Adam. Optimizer |
| patience | How many steps to continue without improvement in evaluation score. 0 disables early stopping. Defaults to 1600. int |
| score_weights | Score names shown in metrics mapped to their weight towards the final weighted score. See here for details. Defaults to {}. Dict[str, float] |
| seed | The random seed. Defaults to variable ${system.seed}. int |
| train_corpus | Dot notation of the config location defining the train corpus. Defaults to corpora.train. str |
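As a sketch, a [training] block spelling out a few of the defaults above might look like this (values mirror the documented defaults):

```ini
[training]
train_corpus = "corpora.train"
dev_corpus = "corpora.dev"
dropout = 0.1
eval_frequency = 200
max_steps = 20000
patience = 1600

[training.optimizer]
@optimizers = "Adam.v1"

[training.batcher]
@batchers = "spacy.batch_by_words.v1"
discard_oversize = false
tolerance = 0.2
```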
pretraining section (optional)
This section is optional and defines settings and controls for language model pretraining. It's used when you run spacy pretrain.
| Name | Description |
|---|---|
| max_epochs | Maximum number of epochs. Defaults to 1000. int |
| dropout | The dropout rate. Defaults to 0.2. float |
| n_save_every | Saving frequency. Defaults to null. Optional[int] |
| objective | The pretraining objective. Defaults to {"type": "characters", "n_characters": 4}. Dict[str, Any] |
| optimizer | The optimizer. The learning rate schedule and other settings can be configured as part of the optimizer. Defaults to Adam. Optimizer |
| corpus | Dot notation of the config location defining the corpus with raw text. Defaults to corpora.pretrain. str |
| batcher | Callable that takes an iterator of Doc objects and yields batches of Docs. Defaults to batch_by_words. Callable[[Iterator[Doc]], Iterator[List[Doc]]] |
| component | Component name to identify the layer with the model to pretrain. Defaults to "tok2vec". str |
| layer | The specific layer of the model to pretrain. If empty, the whole model will be used. str |
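Written out as a config block, the defaults in the table above correspond to something like the following sketch:

```ini
[pretraining]
max_epochs = 1000
dropout = 0.2
component = "tok2vec"
layer = ""

[pretraining.objective]
type = "characters"
n_characters = 4
```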
initialize section
This config block lets you define resources for initializing the pipeline. It's used by Language.initialize and typically called right before training (but not at runtime). The section allows you to specify local file paths or custom functions to load data resources from, without requiring them at runtime when you load the trained pipeline back in. Also see the usage guides on the config lifecycle and custom initialization.
| Name | Description |
|---|---|
| after_init | Optional callback to modify the nlp object after initialization. Optional[Callable[[Language], Language]] |
| before_init | Optional callback to modify the nlp object before initialization. Optional[Callable[[Language], Language]] |
| components | Additional arguments passed to the initialize method of a pipeline component, keyed by component name. If type annotations are available on the method, the config will be validated against them. The initialize methods will always receive the get_examples callback and the current nlp object. Dict[str, Dict[str, Any]] |
| init_tok2vec | Optional path to pretrained tok2vec weights created with spacy pretrain. Defaults to variable ${paths.init_tok2vec}. Ignored when actually running pretraining, as you're creating the file to be used later. Optional[str] |
| lookups | Additional lexeme and vocab data from spacy-lookups-data. Defaults to null. Optional[Lookups] |
| tokenizer | Additional arguments passed to the initialize method of the specified tokenizer. Can be used for languages like Chinese that depend on dictionaries or trained models for tokenization. If type annotations are available on the method, the config will be validated against them. The initialize method will always receive the get_examples callback and the current nlp object. Dict[str, Any] |
| vectors | Name or path of pipeline containing pretrained word vectors to use, e.g. created with init vectors. Defaults to null. Optional[str] |
| vocab_data | Path to JSONL-formatted vocabulary file to initialize vocabulary. Optional[str] |
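For instance, an [initialize] block may combine paths and per-component arguments; in this sketch, the component name and data path are placeholders:

```ini
[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = "vocab-data.jsonl"

[initialize.components]

[initialize.components.my_component]
data_path = "/path/to/component_data"
```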
Training data
Binary training format v3.0
The main data format used in spaCy v3.0 is a binary format created by serializing a DocBin, which represents a collection of Doc objects. This means that you can train spaCy pipelines using the same format it outputs: annotated Doc objects. The binary format is extremely efficient in storage, especially when packing multiple documents together.
Typically, the extension for these binary files is .spacy, and they are used as the input format for specifying a training corpus and for spaCy's CLI train command. The built-in convert command helps you convert spaCy's previous JSON format to the new binary format. It also supports conversion of the .conllu format used by the Universal Dependencies corpora.
Note that while this is the format to save training data in, you do not have to understand the internal details to use it or create training data. See the section on preparing training data.
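As a sketch, annotated Doc objects can be packed into this binary format with DocBin (the output file name is a placeholder):

```python
import spacy
from spacy.tokens import DocBin, Span

nlp = spacy.blank("en")
doc = nlp("Apple is looking at buying U.K. startup")
# Attach a gold-standard entity annotation to the first token
doc.ents = [Span(doc, 0, 1, label="ORG")]

# Pack the annotated docs and serialize them to a .spacy file
doc_bin = DocBin(docs=[doc])
doc_bin.to_disk("./train.spacy")
```

The resulting file can be passed to spacy train, e.g. via the paths.train setting.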
JSON training format (deprecated)
Example structure
Here's an example of dependencies, part-of-speech tags and named entities, taken from the English Wall Street Journal portion of the Penn Treebank:
explosion/spaCy/v2.3.x/examples/training/training-data.json
Annotation format for creating training examples
An Example object holds the information for one training instance. It stores two Doc objects: one for holding the gold-standard reference data, and one for holding the predictions of the pipeline. An Example can be created using the Example.from_dict method with a reference Doc and a dictionary of gold-standard annotations.
| Name | Description |
|---|---|
| text | Raw text. str |
| words | List of gold-standard tokens. List[str] |
| lemmas | List of lemmas. List[str] |
| spaces | List of boolean values indicating whether the corresponding token is followed by a space or not. List[bool] |
| tags | List of fine-grained POS tags. List[str] |
| pos | List of coarse-grained POS tags. List[str] |
| morphs | List of morphological features. List[str] |
| sent_starts | List of boolean values indicating whether each token is the first of a sentence or not. List[bool] |
| deps | List of string values indicating the dependency relation of a token to its head. List[str] |
| heads | List of integer values indicating the dependency head of each token, referring to the absolute index of each token in the text. List[int] |
| entities | Option 1: List of BILUO tags per token of the format "{action}-{label}", or None for unannotated tokens. List[str] |
| entities | Option 2: List of (start_char, end_char, label) tuples defining all entities in the text. List[Tuple[int, int, str]] |
| cats | Dictionary of label/value pairs indicating how relevant a certain text category is for the text. Dict[str, float] |
| links | Dictionary of offset/dict pairs defining named entity links. The character offsets are linked to a dictionary of relevant knowledge base IDs. Dict[Tuple[int, int], Dict] |
| spans | Dictionary of spans_key/List[Tuple] pairs defining the spans for each spans key as (start_char, end_char, label, kb_id) tuples. Dict[str, List[Tuple[int, int, str, str]]] |
Example
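A minimal sketch, using made-up text and annotations, of how the dictionary format above maps onto Example.from_dict:

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
text = "Apple is looking at buying U.K. startup"
annotations = {
    "words": ["Apple", "is", "looking", "at", "buying", "U.K.", "startup"],
    # Entities as (start_char, end_char, label) tuples (option 2 above)
    "entities": [(0, 5, "ORG"), (27, 31, "GPE")],
}
# The reference Doc holds the gold standard; the predicted Doc is
# created from the raw text via the pipeline's tokenizer
example = Example.from_dict(nlp.make_doc(text), annotations)
```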
Lexical data for vocabulary
This data file can be provided via the vocab_data setting in the [initialize] block of the training config to pre-define the lexical data to initialize the nlp object's vocabulary with. The file should contain one lexical entry per line. The first line defines the language and vocabulary settings. All other lines are expected to be JSON objects describing an individual lexeme. The lexical attributes will then be set as attributes on spaCy's Lexeme object.
First line
Entry structure
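As a hedged illustration (the attribute values and the exact set of attributes shown here are invented for the example), the first line carries the language and settings, and each following line describes one lexeme:

```json
{"lang": "en", "settings": {"oov_prob": -20.502029418945312}}
{"orth": "the", "id": 1, "lower": "the", "norm": "the", "shape": "xxx", "prefix": "t", "suffix": "the", "length": 3, "prob": -3.528766632080078, "is_alpha": true, "is_ascii": true}
```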
Here's an example of the 20 most frequent lexemes in the English training data:
explosion/spaCy/master/extra/example_data/vocab-data.jsonl
Pipeline meta
The pipeline meta is available as the file meta.json and exported automatically when you save an nlp object to disk. Its contents are available as nlp.meta.
| Name | Description |
|---|---|
| lang | Pipeline language ISO code. Defaults to "en". str |
| name | Pipeline name, e.g. "core_web_sm". The final package name will be {lang}_{name}. Defaults to "pipeline". str |
| version | Pipeline version. Will be used to version a Python package created with spacy package. Defaults to "0.0.0". str |
| spacy_version | spaCy version range the package is compatible with. Defaults to the spaCy version used to create the pipeline, up to next minor version, which is the default compatibility for the available trained pipelines. For instance, a pipeline trained with v3.0.0 will have the version range ">=3.0.0,<3.1.0". str |
| parent_package | Name of the spaCy package. Typically "spacy" or "spacy_nightly". Defaults to "spacy". str |
| requirements | Python package requirements that the pipeline depends on. Will be used for the Python package setup in spacy package. Should be a list of package names with optional version specifiers, just like you'd define them in a setup.cfg or requirements.txt. Defaults to []. List[str] |
| description | Pipeline description. Also used for Python package. Defaults to "". str |
| author | Pipeline author name. Also used for Python package. Defaults to "". str |
| email | Pipeline author email. Also used for Python package. Defaults to "". str |
| url | Pipeline author URL. Also used for Python package. Defaults to "". str |
| license | Pipeline license. Also used for Python package. Defaults to "". str |
| sources | Data sources used to train the pipeline. Typically a list of dicts with the keys "name", "url", "author" and "license". See here for examples. Defaults to None. Optional[List[Dict[str, str]]] |
| vectors | Information about the word vectors included with the pipeline. Typically a dict with the keys "width", "vectors" (number of vectors), "keys" and "name". Dict[str, Any] |
| pipeline | Names of pipeline components, in order. Corresponds to nlp.pipe_names. Only exists for reference and is not used to create the components. This information is defined in the config.cfg. Defaults to []. List[str] |
| labels | Label schemes of the trained pipeline components, keyed by component name. Corresponds to nlp.pipe_labels. See here for examples. Defaults to {}. Dict[str, Dict[str, List[str]]] |
| performance | Training accuracy, added automatically by spacy train. Dictionary of score names mapped to scores. Defaults to {}. Dict[str, Union[float, Dict[str, float]]] |
| speed | Inference speed, added automatically by spacy train. Typically a dictionary with the keys "cpu", "gpu" and "nwords" (words per second). Defaults to {}. Dict[str, Optional[Union[float, str]]] |
| spacy_git_version v3.0 | Git commit of spacy used to create pipeline. str |
| other | Any other custom meta information you want to add. The data is preserved in nlp.meta. Any |
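A trimmed meta.json for a hypothetical pipeline could look like this (all names and values are placeholders):

```json
{
  "lang": "en",
  "name": "example_pipeline",
  "version": "0.0.1",
  "spacy_version": ">=3.0.0,<3.1.0",
  "description": "Example pipeline for illustration",
  "author": "Jane Doe",
  "license": "MIT",
  "pipeline": ["tok2vec", "tagger", "ner"]
}
```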