1. Why fine-tune a large model
Definition of fine-tuning
Fine-tuning a large model means continuing to train an already pre-trained model on a dataset from a specific domain. The goal is to improve the model's performance on particular tasks so that it adapts to, and performs better in, that domain.
Core reasons for fine-tuning
Customized capabilities: the core reason for fine-tuning is to give the model more customized behavior. General-purpose large models are powerful, but they can underperform in specialized domains; fine-tuning lets the model adapt to the needs and characteristics of a specific field.
Domain knowledge: by training on a domain-specific dataset, the model learns that domain's knowledge and language patterns, which helps it achieve better results on tasks in that domain.
2. Introduction to unsloth
unsloth is an open-source project that fine-tunes Llama 3, Mistral, and Gemma 2-5x faster than standard Hugging Face training while using 80% less memory.
GitHub: https://github.com/unslothai/unsloth
3. Hands-on tutorial
1. Hardware environment
I rented a GPU instance on AutoDL for the fine-tuning. Advantages: 1. It saves time and effort, since you don't need to build your own hardware environment. 2. It is cheap: for the learning stage, 2x RTX 3080 with 20 GB of VRAM in total is plenty, at only ¥1.08 per hour; booting the instance without a GPU attached costs just ¥0.01 per hour.
2. Software environment
The environment is installed with Conda:
conda create --name unsloth_env python=3.10
conda activate unsloth_env
conda install pytorch-cuda=12.1 pytorch cudatoolkit xformers -c pytorch -c nvidia -c xformers
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps trl peft accelerate bitsandbytes
The git clone may fail due to network issues; use the academic resource acceleration provided by AutoDL. The pull is still slow, so be patient:
source /etc/network_turbo
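Before moving on, a quick sanity check is useful. The snippet below is a minimal sketch of my own (not part of the original post) that confirms PyTorch sees the GPUs and that unsloth imports cleanly:
import torch
print(torch.__version__, torch.version.cuda)         # expect a CUDA 12.1 build
print(torch.cuda.is_available(), torch.cuda.device_count())
from unsloth import FastLanguageModel                # should import without errors
print("unsloth OK")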
3. Running the code
Due to network restrictions you may not be able to reach Hugging Face directly; in that case use the domestic mirror https://hf-mirror.com/ instead.
pip install -U huggingface_hub
export HF_ENDPOINT=https://hf-mirror.com
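Note that the export above only applies to the current shell session. As an alternative (my own suggestion, not in the original post), you can set the mirror at the top of the Python script itself, before any Hugging Face library is imported:
import os
# Must be set before importing huggingface_hub / transformers / unsloth,
# otherwise downloads will still go to huggingface.co.
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")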
Save the code below as a .py file, upload it to the server's root directory, and run it with python test-unlora.py.
Testing the model before fine-tuning
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
dtype = None  # None lets unsloth auto-detect (bfloat16 on Ampere+, float16 otherwise)
load_in_4bit = True

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # The token argument is for a Hugging Face access token (gated models only);
    # the mirror is selected via the HF_ENDPOINT environment variable set above.
)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""

FastLanguageModel.for_inference(model)  # switch the model into inference mode
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "海绵宝宝的书法是不是叫做海绵体",  # instruction
            "",  # input
            "",  # output - leave blank, the model will generate it
        )
    ],
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
The code above downloads the model from the mirror, which takes roughly 10 minutes. If the output looks reasonable, you can proceed to fine-tuning.
The 50 GB system disk is not enough for fine-tuning; purchase an extra 50-100 GB data disk.
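If you are unsure how much space is left, a quick check is shown below (a sketch of mine; the data-disk mount point /root/autodl-tmp is an AutoDL convention and may differ on your instance):
import shutil
total, used, free = shutil.disk_usage("/root/autodl-tmp")  # adjust the path to your data disk
print(f"free: {free / 1024**3:.1f} GB")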
Fine-tuning the model
import os
from unsloth import FastLanguageModel
import torch
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the model
max_seq_length = 2048
dtype = None  # None lets unsloth auto-detect (bfloat16 on Ampere+, float16 otherwise)
load_in_4bit = True

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # The token argument is for a Hugging Face access token (gated models only);
    # the mirror is selected via the HF_ENDPOINT environment variable.
)

# Prepare the training data
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # the EOS token must be appended to every sample

def formatting_prompts_func(examples):
    instructions = examples["instruction"]
    inputs = examples["input"]
    outputs = examples["output"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # EOS_TOKEN must be appended, otherwise generation never stops
        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return { "text" : texts, }

# Hugging Face dataset path
dataset = load_dataset("kigner/ruozhiba-llama3", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True,)
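# (Optional check, not in the original post) print one formatted training sample
# to confirm the Alpaca template and the EOS token were applied as expected:
print(dataset[0]["text"])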
# Configure the LoRA adapter
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = True,
    random_state = 3407,
    max_seq_length = max_seq_length,
    use_rslora = False,
    loftq_config = None,
)

# Configure the trainer
trainer = SFTTrainer(
    model = model,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    tokenizer = tokenizer,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 10,
        max_steps = 60,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        output_dir = "outputs",
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
    ),
)
# Start training
trainer.train()
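# (Optional, not in the original post) report peak GPU memory after training,
# handy for judging whether the rented card is large enough:
peak_gb = torch.cuda.max_memory_reserved() / 1024**3
print(f"Peak reserved GPU memory: {peak_gb:.2f} GB")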
# Save the fine-tuned LoRA adapter
model.save_pretrained("lora_model")
# Merge the adapter into the base model and save as 16-bit Hugging Face weights
model.save_pretrained_merged("outputs", tokenizer, save_method = "merged_16bit",)
# Merge the adapter and quantize to a 4-bit GGUF file
#model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
If you need 4-bit quantization, uncomment the last line; note that the quantization step requires a fairly large amount of disk space.
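After training, you can sanity-check the result by loading the saved adapter and running the same prompt as before fine-tuning. The sketch below is my own addition (not in the original post); it assumes the "lora_model" directory produced by model.save_pretrained above and reuses the same Alpaca prompt:
from unsloth import FastLanguageModel
from transformers import TextStreamer

# "lora_model" is the adapter directory saved above; unsloth loads the adapter
# together with the base model it was trained from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
inputs = tokenizer([alpaca_prompt.format("海绵宝宝的书法是不是叫做海绵体", "", "")], return_tensors = "pt").to("cuda")
_ = model.generate(**inputs, streamer = TextStreamer(tokenizer), max_new_tokens = 128)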