
How to Fine-Tune ChatGLM with MindSpore


This article is shared from the Huawei Cloud community post "ChatGLM Fine-Tuning Based on MindSpore", by JeffDing.

ChatGLM Fine-Tuning Based on MindSpore

Clone the Hugging Face Model

Clone the chatglm-6b repository and download the sharded model files:

git lfs install
git clone https://huggingface.co/THUDM/chatglm-6b
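
The repository stores the weights as multiple sharded .bin files pulled via Git LFS. A quick way to confirm the shards were actually downloaded rather than left as LFS pointer stubs (the shard naming below matches the THUDM/chatglm-6b repository at the time of writing and may change):

# Each shard should be on the order of gigabytes; LFS pointer stubs are only a few hundred bytes
ls -lh chatglm-6b/pytorch_model-*.bin
ls -lh chatglm-6b/ice_text.model   # SentencePiece tokenizer model, used later as vocab_file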

Prepare the Environment

Install Transformers

pip install transformers

Run the following Python script to merge the model weights:

from transformers import AutoModel
import torch as pt

# Load the sharded Hugging Face checkpoint and cast the weights to fp16
pt_ckpt_path = "./models/chatglm-6b"
model = AutoModel.from_pretrained(pt_ckpt_path, trust_remote_code=True).half()

# Save the merged weights as a single .pth file for the MindSpore conversion step
pt_pth_path = "models/mindspore/pt_glm_6b.pth"
pt.save(model.state_dict(), pt_pth_path)
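
Before converting, it can be worth confirming that the merged .pth file loads back cleanly; a minimal check (not part of the original post):

import torch as pt

# Reload the merged state dict on CPU and report some basic statistics
state_dict = pt.load("models/mindspore/pt_glm_6b.pth", map_location="cpu")
print(f"tensors: {len(state_dict)}")
print(f"total parameters: {sum(t.numel() for t in state_dict.values()):,}")
print("sample keys:", list(state_dict.keys())[:5])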

Run the conversion script to obtain the converted output file ms_glm_6b.ckpt:

python mindformers/models/glm/convert_weight.py --pt_ckpt_path /home/ma-user/work/models/mindspore/pt_glm_6b.pth --ms_ckpt_path ../models/mindspore/ms_glm_6b.ckpt
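
If the conversion succeeds, a minimal sanity check (an addition here, not part of the original walkthrough) is to load the checkpoint back with MindSpore and inspect a few parameter names and shapes:

import mindspore as ms

# Load the converted checkpoint into a name -> Parameter dict and spot-check it
param_dict = ms.load_checkpoint("../models/mindspore/ms_glm_6b.ckpt")
print(f"parameters: {len(param_dict)}")
for name in list(param_dict)[:5]:
    print(name, param_dict[name].shape)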

Note: the conversion script may fail with an error related to loading libgomp.

Workaround:

export LD_PRELOAD=$LD_PRELOAD:/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/torch/lib/libgomp-d22c30c5.so.1 

Rationale: locate the libgomp-d22c30c5.so.1 bundled with the torch package and add it to the LD_PRELOAD environment variable. This error appears to occur only on ARM platforms.
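
Since both the conda environment path and the hashed file name (libgomp-d22c30c5.so.1 above) differ between installations, a hedged way to locate whichever libgomp copy your torch installation actually ships and preload it:

# Find the libgomp shared object bundled under torch/lib and add it to LD_PRELOAD
LIBGOMP=$(find "$(python -c 'import os, torch; print(os.path.dirname(torch.__file__))')/lib" -name 'libgomp*.so*' | head -n 1)
echo "$LIBGOMP"
export LD_PRELOAD=$LD_PRELOAD:$LIBGOMP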

Fine-Tuning

Data Processing

The ADGEN dataset task is to generate a piece of advertising copy (summary) from the input (content). The dataset can be prepared in one of two ways: generate MindRecord files offline, or generate the data on the fly during training; either one works.

Download link: https://cloud.tsinghua.edu.cn/f/b3f119a008264b1cabd1/?dl=1

In the task configuration file configs/glm/run_glm_6b_*.yaml, point dataset_dir in the ==== dataset config ==== section to the *.json file and vocab_file to the vocabulary file.
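
For example, with the AdvertiseGen archive extracted to /home/ma-user/work/data/AdvertiseGen (the path used throughout this post; adjust it to your own layout), the relevant excerpt looks like this. ice_text.model is the tokenizer model from the chatglm-6b repository, copied next to the data here.

# ==== dataset config ==== (excerpt)
train_dataset: &train_dataset
  data_loader:
    type: ADGenDataLoader
    dataset_dir: "/home/ma-user/work/data/AdvertiseGen/train.json"   # the downloaded *.json file
  tokenizer:
    type: ChatGLMTokenizer
    vocab_file: "/home/ma-user/work/data/AdvertiseGen/ice_text.model"  # vocabulary / tokenizer file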

LoRA Low-Parameter Fine-Tuning

Launching LoRA low-parameter fine-tuning with the run_mindformer.py script

When fine-tuning with the LoRA algorithm, use the configuration file configs/glm/run_glm_6b_lora.yaml, which contains all the configuration items required for LoRA low-parameter fine-tuning.

Modify the dataset and model weight paths in the configuration

Dataset: in mindformers/configs/glm/run_glm_6b_lora.yaml, set dataset_dir under train_dataset to the path of the dataset prepared above.

Loading pre-trained weights: in mindformers/configs/glm/run_glm_6b_lora.yaml, set load_checkpoint to the path of the pre-trained model weights, as in the excerpt below.
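
Concretely, the load_checkpoint edit is a single line near the top of run_glm_6b_lora.yaml; the path below is the converted MindSpore checkpoint produced earlier.

# run_glm_6b_lora.yaml (excerpt)
load_checkpoint: "/home/ma-user/work/models/mindspore/ms_glm_6b.ckpt"  # converted MindSpore weights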

Install jieba

pip install -r requirements.txt

Start the LoRA low-parameter fine-tuning script (single card):

python run_mindformer.py --config=./configs/glm/run_glm_6b_lora.yaml --use_parallel=False --run_mode=finetune

Appendix: run_glm_6b_lora.yaml

seed: 0
run_mode: 'finetune'
load_checkpoint: "/home/ma-user/work/models/mindspore/ms_glm_6b.ckpt"
src_strategy_path_or_dir: ''
auto_trans_ckpt: False  # If true, auto transform load_checkpoint to load in distributed model
only_save_strategy: False
resume_training: False
output_dir: './output'  # Customizing this is not currently supported; do not change the default

# ==== context config ====
context:
  mode: 0 #0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  enable_graph_kernel: False
  graph_kernel_flags: "--disable_expand_ops=Softmax,Dropout --enable_parallel_fusion=true --reduce_fuse_depth=8 --enable_auto_tensor_inplace=true"
  max_call_depth: 10000
  max_device_memory: "30GB"
  save_graphs: False
  device_id: 0

# aicc
remote_save_url: "Please input obs url on AICC platform."

# ==== model config ====
model:
  model_config:
    type: GLMConfig
    vocab_size: 130528
    hidden_size: 4096
    num_layers: 28
    num_heads: 32
    inner_hidden_size: 16384
    seq_length: 512  # length inputs are padded to at inference time; the model's maximum sequence length
    embedding_dropout_prob: 0.0
    attention_dropout_rate: 0.0
    hidden_dropout_rate: 0.0
    hidden_size_per_attention_head: # default "None" means hidden-size/num-attention-heads.
    layernorm_order: "post"
    layernorm_epsilon: 1.0e-5
    use_final_layernorm: True
    use_past: False
    activation_func: 'GELU'
    position_encoding_2d: True
    param_init_type: "float16"
    layernorm_compute_type: "float32"
    softmax_compute_type: "float32"
    compute_dtype: "float16"
    bos_token_id: 130004
    eos_token_id: 130005
    mask_token_id: 130000
    gmask_token_id: 130001
    pad_token_id: 3
    max_decode_length: 2048  # The maximum length of the generated words.
    is_enhanced_encoder: True
    is_sample_acceleration: False
    checkpoint_name_or_path: "glm_6b_lora"
    top_k: 1
    top_p: 1
    repetition_penalty: 1
    do_sample: True
    pet_config:
      pet_type: lora
      lora_rank: 8
      lora_alpha: 32
      lora_dropout: 0.1
  arch:
    type: GLMForPreTrainingWithLora

trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'glm_6b_lora'
# if True, do evaluate during the training process. if false, do nothing.
# note that the task trainer should support _evaluate_in_training function.
do_eval: False

metric:
  type: ADGENMetric

processor:
  return_tensors: ms
  tokenizer:
    type: ChatGLMTokenizer
    bos_token: '<sop>'
    eos_token: '<eop>'
    end_token: '</s>'
    mask_token: '[MASK]'
    gmask_token: '[gMASK]'
    pad_token: '<pad>'
    unk_token: '<unk>'
  type: GLMProcessor

# ==== dataset config ====
train_dataset: &train_dataset
  data_loader:
    type: ADGenDataLoader
    dataset_dir: "/home/ma-user/work/data/AdvertiseGen/train.json"
    shuffle: True
    phase: "train"
    origin_columns: ["content", "summary"]
  tokenizer:
    type: ChatGLMTokenizer
    vocab_file: "/home/ma-user/work/data/AdvertiseGen/ice_text.model"
  input_columns: ["input_ids", "labels", "position_ids", "attention_mask"]
  max_source_length: 64
  max_target_length: 64
  ignore_pad_token_for_loss: True
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
  seed: 0

train_dataset_task:
  type: KeyWordGenDataset
  dataset_config: *train_dataset

eval_dataset: &eval_dataset
  data_loader:
    type: ADGenDataLoader
    dataset_dir: "/home/ma-usr/work/data/AdvertiseGen/dev.json"
    shuffle: False
    phase: "eval"
    origin_columns: ["content", "summary"]
  tokenizer:
    type: ChatGLMTokenizer
    vocab_file: "/home/ma-usr/work/data/AdvertiseGen/ice_text.model"
  max_source_length: 256
  max_target_length: 256
  ignore_pad_token_for_loss: True
  input_columns: ["input_ids", "labels"]
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
  seed: 0

eval_dataset_task:
  type: KeyWordGenDataset
  dataset_config: *eval_dataset

# ==== runner config ====
runner_config:
  epochs: 1
  batch_size: 8
  sink_mode: True
  sink_size: 4

runner_wrapper:
  type: MFTrainOneStepCell
  scale_sense:
    type: DynamicLossScaleUpdateCell
    loss_scale_value: 4294967296
    scale_factor: 2
    scale_window: 1000
  use_clip_grad: True

# lr schedule
lr_schedule:
  type: polynomial
  learning_rate: 5.e-5
  lr_end: 1.e-6
  warmup_steps: 2000
  total_steps: -1 # -1 means it will load the total steps of the dataset

# optimizer
optimizer:
  type: FusedAdamWeightDecay
  beta1: 0.9
  beta2: 0.95
  eps: 1.e-8
  weight_decay: 0.1
layer_scale: False
lr_scale: False

# parallel config
use_parallel: False
parallel:
  parallel_mode: 0 # 0-data parallel, 1-semi-auto parallel, 2-auto parallel, 3-hybrid parallel
  gradients_mean: False
  loss_repeated_mean: True
  enable_alltoall: False
  full_batch: True
  search_mode: "sharding_propagation"
  enable_parallel_optimizer: False  # optimizer shard
  strategy_ckpt_save_file: "./ckpt_strategy.ckpt"
parallel_config:
  data_parallel: 1
  model_parallel: 1
  pipeline_stage: 1
  expert_parallel: 1
  optimizer_shard: False  # optimizer shard
  micro_batch_num: 1
  vocab_emb_dp: True
  gradient_aggregation_group: 4
micro_batch_interleave_num: 1

# moe
moe_config:
  expert_num: 1
  capacity_factor: 1.05
  aux_loss_factor: 0.05
  num_experts_chosen: 1

# recompute
recompute_config:
  recompute: False
  parallel_optimizer_comm_recompute: False
  mp_comm_recompute: True
  recompute_slice_activation: False

# autotune
auto_tune: False
filepath_prefix: './autotune'
autotune_per_step: 10

# profile
profile: False
profile_start_step: 1
profile_stop_step: 10
init_start_profile: True
profile_communication: True
profile_memory: True

# callbacks
callbacks:
  - type: MFLossMonitor
  - type: CheckpointMointor
    prefix: "glm-6b-lora"
    save_checkpoint_steps: 500
    keep_checkpoint_max: 2
    integrated_save: False
    async_save: False
  - type: ObsMonitor
    keep_last: False
eval_callbacks:
  - type: ObsMonitor
    keep_last: False
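
As a rough illustration of why this is called low-parameter fine-tuning: with lora_rank: 8 and the model dimensions above, the trainable adapter weights amount to only a few million parameters against the roughly 6.2B-parameter base model. The sketch below assumes the LoRA adapters are attached to the fused query_key_value projection (hidden_size -> 3 * hidden_size) in every transformer layer; the actual target modules depend on the MindFormers GLMForPreTrainingWithLora implementation.

# Back-of-the-envelope LoRA trainable-parameter count for the config above
hidden_size = 4096
num_layers = 28
lora_rank = 8

in_features = hidden_size
out_features = 3 * hidden_size                                    # fused Q, K, V projection (assumption)
per_layer = in_features * lora_rank + lora_rank * out_features    # LoRA A and B matrices
total = per_layer * num_layers

print(f"trainable LoRA parameters ~= {total:,}")                  # ~3.7M, versus ~6.2B in the base model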

