I got the inspiration from this video:
https://www.bilibili.com/video/BV1Cw411y7gs/?p=5&spm_id_from=pageDriver&vd_source=d68ed178f151e80fea1e02efd205802c
It turns out the original model architecture can be debugged on a single machine at low cost, simply by shrinking the config sizes.
Here is the debug code:
from transformers.models.llama import LlamaModel, LlamaConfig
import torch

def run():
    # LLaMA-7B-like config with every size halved so it runs cheaply on one machine
    # (the original post wrote intermediate_size=1108; the 7B default is 11008, halved here)
    llamaconfig = LlamaConfig(
        vocab_size=32000,
        hidden_size=4096 // 2,
        intermediate_size=11008 // 2,
        num_hidden_layers=32 // 2,
        num_attention_heads=32 // 2,
        max_position_embeddings=2048 // 2,
    )
    llamamodel = LlamaModel(config=llamaconfig)
    # random token ids: a batch of 4 sequences, 30 tokens each
    inputs_ids = torch.randint(low=0, high=llamaconfig.vocab_size, size=(4, 30))
    res = llamamodel(inputs_ids)
    print(res)

run()
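As a quick sanity check (my addition, not part of the original post): the forward call returns a BaseModelOutputWithPast, and its last_hidden_state should have shape (batch, seq_len, hidden_size), i.e. (4, 30, 2048) with this halved config. Inside run(), right after the forward pass, one could add:

    # hypothetical extra check inside run(), after res = llamamodel(inputs_ids)
    assert res.last_hidden_state.shape == (4, 30, llamaconfig.hidden_size)
    print(res.last_hidden_state.shape)  # torch.Size([4, 30, 2048])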
Below I will write down the core parts encountered while stepping through this code in the debugger.
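The original post stops here, so as one possible way to watch the core of the forward pass (my own sketch, not the author's walkthrough), you could register forward hooks on each decoder layer and print the hidden-state shapes as LlamaModel runs:

import torch
from transformers.models.llama import LlamaModel, LlamaConfig

def run_with_hooks():
    config = LlamaConfig(
        vocab_size=32000,
        hidden_size=4096 // 2,
        intermediate_size=11008 // 2,
        num_hidden_layers=32 // 2,
        num_attention_heads=32 // 2,
        max_position_embeddings=2048 // 2,
    )
    model = LlamaModel(config=config)

    # print the hidden-state shape produced by each decoder layer during forward()
    def make_hook(idx):
        def hook(module, inputs, output):
            # depending on the transformers version, a decoder layer returns either
            # a tensor or a tuple whose first element is the hidden states
            hidden = output[0] if isinstance(output, tuple) else output
            print(f"decoder layer {idx}: {tuple(hidden.shape)}")
        return hook

    for i, layer in enumerate(model.layers):
        layer.register_forward_hook(make_hook(i))

    input_ids = torch.randint(low=0, high=config.vocab_size, size=(4, 30))
    model(input_ids)

run_with_hooks()

Each layer should report (4, 30, 2048), which makes it easy to confirm the shapes flowing through the halved model without needing the full-size weights.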