The Sentence Transformers library has been upgraded to v3, which overhauls the model-training components and makes training and fine-tuning much simpler. I walked through the official tutorial and fine-tuned an embedding model without trouble; below is a distilled summary of that tutorial.
1 Required Components
Fine-tuning an embedding model with the Sentence Transformers library requires the following components:
- Dataset: the data used for training and evaluation.
- Loss function: a function that quantifies model performance and guides the optimization process.
- Training arguments (optional): parameters that influence training performance as well as tracking and debugging.
- Evaluator (optional): a tool for evaluating the model before, during, or after training.
- Trainer: brings the model, dataset, loss function, and other components together for training.
2 Datasets
Most fine-tuning data comes from local files, so only the handling of local data is covered here. For online datasets, refer to the corresponding APIs.
2.1 Data Types
Common file formats are JSON, CSV, and Parquet, all of which can be loaded with load_dataset:
from datasets import load_dataset

# Each loader returns a DatasetDict; local files end up in a "train" split
csv_dataset = load_dataset("csv", data_files="my_file.csv")
json_dataset = load_dataset("json", data_files="my_file.json")
parquet_dataset = load_dataset("parquet", data_files="my_file.parquet")
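Since these loaders place every row into a single "train" split, a held-out evaluation set often has to be carved out manually. A small sketch, assuming a local my_file.json:

```python
from datasets import load_dataset

# "my_file.json" is a placeholder for a local data file.
dataset = load_dataset("json", data_files="my_file.json")["train"]

# Hold out 10% of the rows for evaluation.
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_dataset, eval_dataset = splits["train"], splits["test"]
```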
2.2 Data Format
The data format must match the loss function. If the loss function operates on triplets, the dataset columns must be ['anchor', 'positive', 'negative'], and the order must not be changed. If the loss function scores sentence pairs against a similarity value or a class label, the dataset must contain a ['label'] or ['score'] column, and all remaining columns are passed to the loss function as inputs. See Table 1 for common data formats and the corresponding loss function choices.
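As an illustration, a triplet-format dataset can be built in memory with Dataset.from_dict (the sentences below are invented for the example):

```python
from datasets import Dataset

# Column names and order must match the loss function's expected inputs.
train_dataset = Dataset.from_dict({
    "anchor": ["How do I reset my password?"],
    "positive": ["Steps for resetting a forgotten password"],
    "negative": ["How do I change my email address?"],
})
```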
3 Loss Functions
The table below collects common data formats and their matching loss functions, compiled from the official loss overview.
| Inputs | Labels | Appropriate Loss Functions |
|---|---|---|
| (sentence_A, sentence_B) pairs | class | SoftmaxLoss |
| (anchor, positive) pairs | none | MultipleNegativesRankingLoss |
| (anchor, positive/negative) pairs | 1 if positive, 0 if negative | ContrastiveLoss / OnlineContrastiveLoss |
| (sentence_A, sentence_B) pairs | float similarity score | CoSENTLoss / AnglELoss / CosineSimilarityLoss |
| (anchor, positive, negative) triplets | none | MultipleNegativesRankingLoss / TripletLoss |
Table 1: Common data formats and loss functions
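As a sketch of how Table 1 drives the choice: (sentence_A, sentence_B) pairs with a float similarity score match CoSENTLoss (the base model below is just an example):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("all-MiniLM-L6-v2")  # example base model

# Pairs with a float similarity score; "score" is treated as the label,
# and the remaining columns are fed to the loss as inputs.
pair_dataset = Dataset.from_dict({
    "sentence1": ["A plane is taking off."],
    "sentence2": ["An air plane is taking off."],
    "score": [0.95],
})
loss = CoSENTLoss(model)
```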
4 Training Arguments
Training arguments are configured mainly to improve training quality; they can also surface progress and other run information, which is convenient for debugging.
4.1 Parameters that affect training quality
learning_rate, lr_scheduler_type, warmup_ratio, num_train_epochs, max_steps, per_device_train_batch_size, per_device_eval_batch_size, auto_find_batch_size, fp16, bf16, gradient_accumulation_steps, gradient_checkpointing, eval_accumulation_steps, optim, batch_sampler, multi_dataset_batch_sampler
4.2 Parameters for tracking the training process
eval_strategy, eval_steps, save_strategy, save_steps, save_total_limit, load_best_model_at_end, report_to, log_level, logging_steps, push_to_hub, hub_model_id, hub_strategy, hub_private_repo
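A minimal sketch combining a few parameters from both groups (the values are illustrative, not tuned recommendations):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/example-run",  # required; placeholder path
    # Parameters that affect training quality:
    learning_rate=2e-5,
    warmup_ratio=0.1,
    gradient_accumulation_steps=2,    # emulates a larger effective batch
    bf16=True,                        # only if the GPU supports BF16
    # Tracking/debugging parameters:
    eval_strategy="steps",
    eval_steps=500,
    logging_steps=100,
    report_to="none",                 # or "wandb", "tensorboard", ...
)
```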
5 Evaluators
An evaluator measures the model's performance before, during, or after training. Like the loss function, it must match the data format; the table below shows how to choose one.
| Evaluator | Required Data |
|---|---|
| BinaryClassificationEvaluator | Pairs with class labels |
| EmbeddingSimilarityEvaluator | Pairs with similarity scores |
| InformationRetrievalEvaluator | Queries (qid => question), corpus (cid => document), and relevant documents (qid => set[cid]) |
| MSEEvaluator | Source sentences to embed with a teacher model and target sentences to embed with the student model. Can be the same texts. |
| ParaphraseMiningEvaluator | Mapping of IDs to sentences, plus pairs with IDs of duplicate sentences. |
| RerankingEvaluator | List of {'query': '...', 'positive': [...], 'negative': [...]} dictionaries. |
| TranslationEvaluator | Pairs of sentences in two separate languages. |
| TripletEvaluator | (anchor, positive, negative) triplets. |
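For example, pairs with similarity scores map to EmbeddingSimilarityEvaluator (the model and sentences below are placeholders):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model

# Pairs with gold similarity scores -> EmbeddingSimilarityEvaluator.
dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["A plane is taking off."],
    sentences2=["An air plane is taking off."],
    scores=[0.95],  # gold similarities, typically in [0, 1]
    name="sts-dev",
)
results = dev_evaluator(model)  # returns a dict of metric name -> value
```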
6 Trainer
The trainer ties all of the previous components together. We only need to supply the model, the training data, a loss function, training arguments (optional), and an evaluator (optional) to start training.
from datasets import load_dataset
from sentence_transformers import (
SentenceTransformer,
SentenceTransformerTrainer,
SentenceTransformerTrainingArguments,
SentenceTransformerModelCardData,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers
from sentence_transformers.evaluation import TripletEvaluator
# 1. Load a model to finetune with 2. (Optional) model card data
model = SentenceTransformer(
"microsoft/mpnet-base",
model_card_data=SentenceTransformerModelCardData(
language="en",
license="apache-2.0",
model_name="MPNet base trained on AllNLI triplets",
)
)
# 3. Load a dataset to finetune on
dataset = load_dataset("sentence-transformers/all-nli", "triplet")
train_dataset = dataset["train"].select(range(100_000))
eval_dataset = dataset["dev"]
test_dataset = dataset["test"]
# 4. Define a loss function
loss = MultipleNegativesRankingLoss(model)
# 5. (Optional) Specify training arguments
args = SentenceTransformerTrainingArguments(
# Required parameter:
output_dir="models/mpnet-base-all-nli-triplet",
# Optional training parameters:
num_train_epochs=1,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
learning_rate=2e-5,
warmup_ratio=0.1,
fp16=True, # Set to False if you get an error that your GPU can't run on FP16
bf16=False, # Set to True if you have a GPU that supports BF16
batch_sampler=BatchSamplers.NO_DUPLICATES, # MultipleNegativesRankingLoss benefits from no duplicate samples in a batch
# Optional tracking/debugging parameters:
eval_strategy="steps",
eval_steps=100,
save_strategy="steps",
save_steps=100,
save_total_limit=2,
logging_steps=100,
run_name="mpnet-base-all-nli-triplet", # Will be used in W&B if `wandb` is installed
)
# 6. (Optional) Create an evaluator & evaluate the base model
dev_evaluator = TripletEvaluator(
anchors=eval_dataset["anchor"],
positives=eval_dataset["positive"],
negatives=eval_dataset["negative"],
name="all-nli-dev",
)
dev_evaluator(model)
# 7. Create a trainer & train
trainer = SentenceTransformerTrainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
loss=loss,
evaluator=dev_evaluator,
)
trainer.train()
# (Optional) Evaluate the trained model on the test set
test_evaluator = TripletEvaluator(
anchors=test_dataset["anchor"],
positives=test_dataset["positive"],
negatives=test_dataset["negative"],
name="all-nli-test",
)
test_evaluator(model)
# 8. Save the trained model
model.save_pretrained("models/mpnet-base-all-nli-triplet/final")
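Once saved, the fine-tuned model can be reloaded from the output directory like any other Sentence Transformers model:

```python
from sentence_transformers import SentenceTransformer

# Reload the fine-tuned model from disk and embed some text.
model = SentenceTransformer("models/mpnet-base-all-nli-triplet/final")
embeddings = model.encode(["A sample sentence to embed."])
print(embeddings.shape)  # (1, 768) for an MPNet-base encoder
```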