H2O.ai recently announced two new vision-language models aimed at making document analysis and optical character recognition (OCR) tasks more efficient. The two models, H2OVL Mississippi-2B and H2OVL Mississippi-0.8B, are competitive with models from large technology companies and may offer a more efficient option for enterprises with document-heavy workflows.
Despite having only 0.8 billion parameters, the H2OVL Mississippi-0.8B model outperformed every other model on the OCRBench text-recognition task, including competitors with several billion parameters. The 2-billion-parameter H2OVL Mississippi-2B model, meanwhile, performs well across a range of vision-language benchmarks.
In an interview, H2O.ai founder and CEO Sri Ambati said: "We designed the H2OVL Mississippi models to be high-performance yet cost-effective solutions, bringing AI-powered OCR, visual understanding, and document AI to every industry."
He stressed that the models run efficiently in a wide variety of environments and can be fine-tuned for domain-specific needs, helping enterprises cut costs while improving efficiency.
H2O.ai has released both models free of charge on Hugging Face, allowing developers and enterprises to modify and adapt them to their own requirements. The move not only broadens H2O.ai's user base but also gives companies looking to adopt document AI solutions more choices.
Ambati also noted that the economics of small, purpose-built models should not be overlooked. "Our generative pre-trained transformer models are built on deep collaboration with our customers and are designed to extract meaningful information from enterprise documents," he said, pointing out that H2O.ai's models deliver efficient document processing with a smaller resource footprint, and perform especially well on low-quality scans, hard-to-read handwriting, and heavily edited documents.
Model links:
H2OVL Mississippi-0.8B: https://huggingface.co/h2oai/h2ovl-mississippi-800m
H2OVL Mississippi-2B: https://huggingface.co/h2oai/h2ovl-mississippi-2b
Code
2B
pip install transformers torch torchvision einops timm peft sentencepiece
If you have an Ampere (or newer) GPU, install flash-attention to speed up inference:
pip install flash_attn
import torch
from transformers import AutoModel, AutoTokenizer
# Set up the model and tokenizer
model_path = 'h2oai/h2ovl-mississippi-2b'
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# Example for single image
image_file = './examples/image1.jpg'
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, image_file, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# Example for multiple images - multiround conversation
image_files = ['./examples/image1.jpg', './examples/image2.jpg']
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the Image-1 and Image-2 in detail.'
response, history = model.chat(tokenizer, image_files, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, image_files, question, generation_config=generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
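The 2B snippet above loads the model with the default attention backend. If you installed flash_attn as described earlier, you can switch to FlashAttention-2 through the model config, the same way the 0.8B example below does. A minimal sketch, assuming the 2B checkpoint exposes the same llm_config attribute as the 0.8B one:
import torch
from transformers import AutoConfig, AutoModel
model_path = 'h2oai/h2ovl-mississippi-2b'
# Load the remote config and switch the LLM attention backend to FlashAttention-2
# (assumption: the 2B config exposes llm_config just like the 0.8B model below)
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
config.llm_config._attn_implementation = 'flash_attention_2'  # requires flash_attn and an Ampere or newer GPU
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    config=config,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()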
0.8B
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer
# Set up the model and tokenizer
model_path = 'h2oai/h2ovl-mississippi-800m'
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
# config.llm_config._attn_implementation = 'flash_attention_2' # only usable on Ampere or newer GPUs
config.llm_config._attn_implementation = 'eager' # works on any hardware
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    config=config,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
generation_config = dict(max_new_tokens=2048, do_sample=True)
# pure-text conversation
question = 'Hello, how are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# Example for single image
image_file = './examples/image.jpg'
question = '<image>\nRead the text in the image.'
response, history = model.chat(tokenizer, image_file, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
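As in the 2B example, the history returned by model.chat can be passed back in for a follow-up turn. A small sketch continuing the single-image example above (the follow-up question is illustrative):
# Follow-up turn on the same image, reusing the returned history
question = 'Summarize the text you just read in one sentence.'
response, history = model.chat(tokenizer, image_file, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')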
From: https://blog.csdn.net/weixin_41446370/article/details/143080004