
Converting a PyTorch ONNX Model to a TensorRT Engine on the Jetson Orin Nano

Posted: 2024-07-25 09:18:28
Tags: python pytorch onnx tensorrt

I am trying to convert a Vision Transformer model from the ViT-B/32 UNICOM repository on a Jetson Orin Nano. The model's Vision Transformer class and source code are available here

I convert the model to ONNX with the following code:

import torch
import onnx
import onnxruntime

from unicom.vision_transformer import build_model

if __name__ == '__main__':
    model_name = "ViT-B/32"
    model_name_fp16 = "FP16-ViT-B-32"
    onnx_model_path = f"{model_name_fp16}.onnx"

    model = build_model(model_name)
    model.eval()
    model = model.to('cuda')
    torch_input = torch.randn(1, 3, 224, 224).to('cuda')

    onnx_program = torch.onnx.dynamo_export(model, torch_input)
    onnx_program.save(onnx_model_path)

    onnx_model = onnx.load(onnx_model_path)
    onnx.checker.check_model(onnx_model)  # validate the exported graph

I then convert the ONNX model to a TensorRT engine with this command line:

/usr/src/tensorrt/bin/trtexec --onnx=FP16-ViT-B-32.onnx --saveEngine=FP16-ViT-B-32.trt --workspace=1024 --fp16

This fails with the following error:

[W] --workspace flag has been deprecated by --memPoolSize flag.
[I] === Model Options ===
[I] Format: ONNX
[I] Model: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.onnx
[I] Output:
[I] === Build Options ===
[I] Max batch: explicit batch
[I] Memory Pools: workspace: 1024 MiB, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[I] minTiming: 1
[I] avgTiming: 8
[I] Precision: FP32+FP16
[I] LayerPrecisions:
[I] Layer Device Types:
[I] Calibration:
[I] Refit: Disabled
[I] Version Compatible: Disabled
[I] ONNX Native InstanceNorm: Disabled
[I] TensorRT runtime: full
[I] Lean DLL Path:
[I] Tempfile Controls: { in_memory: allow, temporary: allow }
[I] Exclude Lean Runtime: Disabled
[I] Sparsity: Disabled
[I] Safe mode: Disabled
[I] Build DLA standalone loadable: Disabled
[I] Allow GPU fallback for DLA: Disabled
[I] DirectIO mode: Disabled
[I] Restricted mode: Disabled
[I] Skip inference: Disabled
[I] Save engine: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.trt
[I] Load engine:
[I] Profiling verbosity: 0
[I] Tactic sources: Using default tactic sources
[I] timingCacheMode: local
[I] timingCacheFile:
[I] Heuristic: Disabled
[I] Preview Features: Use default preview flags.
[I] MaxAuxStreams: -1
[I] BuilderOptimizationLevel: -1
[I] Input(s)s format: fp32:CHW
[I] Output(s)s format: fp32:CHW
[I] Input build shapes: model
[I] Input calibration shapes: model
[I] === System Options ===
[I] Device: 0
[I] DLACore:
[I] Plugins:
[I] setPluginsToSerialize:
[I] dynamicPlugins:
[I] ignoreParsedPluginLibs: 0
[I]
[I] === Inference Options ===
[I] Batch: Explicit
[I] Input inference shapes: model
[I] Iterations: 10
[I] Duration: 3s (+ 200ms warm up)
[I] Sleep time: 0ms
[I] Idle time: 0ms
[I] Inference Streams: 1
[I] ExposeDMA: Disabled
[I] Data transfers: Enabled
[I] Spin-wait: Disabled
[I] Multithreading: Disabled
[I] CUDA Graph: Disabled
[I] Separate profiling: Disabled
[I] Time Deserialize: Disabled
[I] Time Refit: Disabled
[I] NVTX verbosity: 0
[I] Persistent Cache Ratio: 0
[I] Inputs:
[I] === Reporting Options ===
[I] Verbose: Disabled
[I] Averages: 10 inferences
[I] Percentiles: 90,95,99
[I] Dump refittable layers:Disabled
[I] Dump output: Disabled
[I] Profile: Disabled
[I] Export timing to JSON file:
[I] Export output to JSON file:
[I] Export profile to JSON file:
[I]
[I] === Device Information ===
[I] Selected Device: Orin
[I] Compute Capability: 8.7
[I] SMs: 8
[I] Device Global Memory: 7620 MiB
[I] Shared Memory per SM: 164 KiB
[I] Memory Bus Width: 128 bits (ECC disabled)
[I] Application Compute Clock Rate: 0.624 GHz
[I] Application Memory Clock Rate: 0.624 GHz
[I]
[I] Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at.
[I]
[I] TensorRT version: 8.6.2
[I] Loading standard plugins
[I] [TRT] [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 33, GPU 4508 (MiB)
[I] [TRT] [MemUsageChange] Init builder kernel library: CPU +1154, GPU +1351, now: CPU 1223, GPU 5866 (MiB)
[I] Start parsing network model.
[I] [TRT] ----------------------------------------------------------------
[I] [TRT] Input filename:   /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.onnx
[I] [TRT] ONNX IR version:  0.0.8
[I] [TRT] Opset version:    1
[I] [TRT] Producer name:    pytorch
[I] [TRT] Producer version: 2.3.0
[I] [TRT] Domain:
[I] [TRT] Model version:    0
[I] [TRT] Doc string:
[I] [TRT] ----------------------------------------------------------------
[I] [TRT] No importer registered for op: unicom_vision_transformer_PatchEmbedding_patch_embed_1. Attempting to import as plugin.
[I] [TRT] Searching for plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1, plugin_version: 1, plugin_namespace:
[E] [TRT] 3: getPluginCreator could not find plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1 version: 1
[E] [TRT] ModelImporter.cpp:768: While parsing node number 0 [unicom_vision_transformer_PatchEmbedding_patch_embed_1 -> "patch_embed_1"]:
[E] [TRT] ModelImporter.cpp:769: --- Begin node ---
[E] [TRT] ModelImporter.cpp:770: input: "l_x_"
[A second trtexec run with --verbose printed the identical option listing again (differing only in "Verbose: Enabled"); the duplicate is omitted here. Its verbose output follows.]
[V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[V] [TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[V] [TRT] Registered plugin creator - ::CoordConvAC version 1
[V] [TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[V] [TRT] Registered plugin creator - ::CropAndResize version 1
[V] [TRT] Registered plugin creator - ::DecodeBbox3DPlugin version 1
[V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[V] [TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[V] [TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[V] [TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[V] [TRT] Registered plugin creator - ::ModulatedDeformConv2d version 1
[V] [TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[V] [TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[V] [TRT] Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[V] [TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[V] [TRT] Registered plugin creator - ::PillarScatterPlugin version 1
[V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[V] [TRT] Registered plugin creator - ::ProposalDynamic version 1
[V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[V] [TRT] Registered plugin creator - ::Proposal version 1
[V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[V] [TRT] Registered plugin creator - ::Region_TRT version 1
[V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[V] [TRT] Registered plugin creator - ::ROIAlign_TRT version 1
[V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[V] [TRT] Registered plugin creator - ::ScatterND version 1
[V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[V] [TRT] Registered plugin creator - ::Split version 1
[V] [TRT] Registered plugin creator - ::VoxelGeneratorPlugin version 1
[I] [TRT] [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 33, GPU 5167 (MiB)
[V] [TRT] Trying to load shared library libnvinfer_builder_resource.so.8.6.2
[V] [TRT] Loaded shared library libnvinfer_builder_resource.so.8.6.2
[I] [TRT] [MemUsageChange] Init builder kernel library: CPU +1154, GPU +995, now: CPU 1223, GPU 6203 (MiB)
[V] [TRT] CUDA lazy loading is enabled.
[I] Start parsing network model.
[I] [TRT] ----------------------------------------------------------------
[I] [TRT] Input filename:   /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.onnx
[I] [TRT] ONNX IR version:  0.0.8
[I] [TRT] Opset version:    1
[I] [TRT] Producer name:    pytorch
[I] [TRT] Producer version: 2.3.0
[I] [TRT] Domain:
[I] [TRT] Model version:    0
[I] [TRT] Doc string:
[I] [TRT] ----------------------------------------------------------------
[V] [TRT] Plugin creator already registered - ::BatchedNMSDynamic_TRT version 1
[V] [TRT] ... (the remaining standard plugin creators are re-registered identically)
[V] [TRT] Adding network input: l_x_ with dtype: float32, dimensions: (1, 3, 224, 224)
[V] [TRT] Registering tensor: l_x_ for ONNX tensor: l_x_
[V] [TRT] Importing initializer: patch_embed.proj.weight
[V] [TRT] Importing initializer: patch_embed.proj.bias
[V] [TRT] Importing initializer: pos_embed
[V] [TRT] Importing initializer: blocks.0.norm1.weight
[V] [TRT] Importing initializer: blocks.0.norm1.bias
[V] [TRT] Importing initializer: blocks.0.attn.qkv.weight
[V] [TRT] Importing initializer: blocks.0.attn.proj.weight
[V] [TRT] Importing initializer: blocks.0.attn.proj.bias
[V] [TRT] Importing initializer: blocks.0.norm2.weight
[V] [TRT] Importing initializer: blocks.0.norm2.bias
[V] [TRT] Importing initializer: blocks.0.mlp.fc1.weight
[V] [TRT] Importing initializer: blocks.0.mlp.fc1.bias
[V] [TRT] Importing initializer: blocks.0.mlp.fc2.weight
[V] [TRT] Importing initializer: blocks.0.mlp.fc2.bias
[V] [TRT] ... (initializers for blocks.1 through blocks.11 imported in the same pattern)
[V] [TRT] Importing initializer: norm.weight
[V] [TRT] Importing initializer: norm.bias
[V] [TRT] Importing initializer: feature.0.weight
[V] [TRT] Importing initializer: feature.1.weight
[V] [TRT] Importing initializer: feature.1.bias
[V] [TRT] Importing initializer: feature.1.running_mean
[V] [TRT] Importing initializer: feature.1.running_var
[V] [TRT] Importing initializer: feature.2.weight
[V] [TRT] Importing initializer: feature.3.weight
[V] [TRT] Importing initializer: feature.3.bias
[V] [TRT] Importing initializer: feature.3.running_mean
[V] [TRT] Importing initializer: feature.3.running_var
[V] [TRT] Parsing node: unicom_vision_transformer_PatchEmbedding_patch_embed_1_1 [unicom_vision_transformer_PatchEmbedding_patch_embed_1]
[V] [TRT] Searching for input: l_x_
[V] [TRT] Searching for input: patch_embed.proj.weight
[V] [TRT] Searching for input: patch_embed.proj.bias
[V] [TRT] unicom_vision_transformer_PatchEmbedding_patch_embed_1_1 [unicom_vision_transformer_PatchEmbedding_patch_embed_1] inputs: [l_x_ -> (1, 3, 224, 224)[FLOAT]], [patch_embed.proj.weight -> (768, 3, 32, 32)[FLOAT]], [patch_embed.proj.bias -> (768)[FLOAT]],
[I] [TRT] No importer registered for op: unicom_vision_transformer_PatchEmbedding_patch_embed_1. Attempting to import as plugin.
[I] [TRT] Searching for plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1, plugin_version: 1, plugin_namespace:
[V] [TRT] Local registry did not find unicom_vision_transformer_PatchEmbedding_patch_embed_1 creator. Will try parent registry if enabled.
[V] [TRT] Global registry did not find unicom_vision_transformer_PatchEmbedding_patch_embed_1 creator. Will try parent registry if enabled.
[E] [TRT] 3: getPluginCreator could not find plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1 version: 1
[E] [TRT] ModelImporter.cpp:768: While parsing node number 0 [unicom_vision_transformer_PatchEmbedding_patch_embed_1 -> "patch_embed_1"]:
[E] [TRT] ModelImporter.cpp:769: --- Begin node ---
[E] [TRT] ModelImporter.cpp:770: input: "l_x_"
input: "patch_embed.proj.weight"
input: "patch_embed.proj.bias"
output: "patch_embed_1"
name: "unicom_vision_transformer_PatchEmbedding_patch_embed_1_1"
op_type: "unicom_vision_transformer_PatchEmbedding_patch_embed_1"
doc_string: ""
domain: "pkg.unicom"

[E] [TRT] ModelImporter.cpp:771: --- End node ---
[E] [TRT] ModelImporter.cpp:773: ERROR: builtin_op_importers.cpp:5403 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[E] Failed to parse onnx file
[I] Finished parsing network model. Parse time: 4.99544
[E] Parsing model failed
[E] Failed to create engine from model or file.
[E] Engine set up failed

The problem seems to come from PatchEmbedding (here), yet the model does not appear to use any special methods or layers that TensorRT cannot convert. Here is the source code of the class:

class PatchEmbedding(nn.Module):
    def __init__(self, input_size=224, patch_size=32, in_channels: int = 3, dim: int = 768):
        super().__init__()
        if isinstance(input_size, int):
            input_size = (input_size, input_size)
        if isinstance(patch_size, int):
            patch_size = (patch_size, patch_size)
        H = input_size[0] // patch_size[0]
        W = input_size[1] // patch_size[1]
        self.num_patches = H * W
        self.proj = nn.Conv2d(
            in_channels, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x).flatten(2).transpose(1, 2)
        return x
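For reference, the shape arithmetic in this class is plain integer math, so you can sanity-check what the exporter should produce. A minimal sketch (pure Python, mirroring the H/W/num_patches computation above; 768 is ViT-B's embedding dim):

```python
def patch_embed_shapes(input_size=224, patch_size=32, dim=768, batch=1):
    """Mirror PatchEmbedding's shape math: conv with kernel=stride=patch_size,
    then flatten(2) + transpose(1, 2) -> (batch, num_patches, dim)."""
    H = input_size // patch_size          # patches along height
    W = input_size // patch_size          # patches along width
    num_patches = H * W
    conv_out = (batch, dim, H, W)         # after self.proj
    tokens = (batch, num_patches, dim)    # after flatten + transpose
    return conv_out, tokens

conv_out, tokens = patch_embed_shapes()
print(conv_out, tokens)  # (1, 768, 7, 7) (1, 49, 768)
```

So for a (1, 3, 224, 224) input, the first node of the graph should just be a Conv producing (1, 768, 7, 7), reshaped to 49 tokens of dimension 768.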

What should I do to make the model convertible to TensorRT?

Thanks

## Environment

**TensorRT Version**:  tensorrt_version_8_6_2_3

**GPU Type**: Jetson Orin Nano

**Nvidia Driver Version**:

**CUDA Version**: 12.2

**CUDNN Version**:  8.9.4.25-1+cuda12.2

**Operating System + Version**: Jetpack 6.0

**Python Version (if applicable)**: 3.10

**PyTorch Version (if applicable)**:  2.3.0

**ONNX Version (if applicable)**:  1.16.1

**onnxruntime-gpu Version (if applicable)**:  1.17.0

**onnxscript Version (if applicable)**:  0.1.0.dev20240721

The error message indicates that TensorRT cannot find a plugin named "unicom_vision_transformer_PatchEmbedding_patch_embed_1". This problem typically occurs when converting an ONNX model that contains custom layers or operations TensorRT does not support natively.

In your case, the PatchEmbedding class itself is not the problem: it only uses standard PyTorch operations (nn.Conv2d, .flatten, .transpose), all of which ONNX and TensorRT understand.
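To see why no plugin should be needed: with kernel_size equal to stride, the Conv2d in PatchEmbedding is equivalent to cutting the image into non-overlapping patches and applying one linear projection per patch. A NumPy sketch of that equivalence (illustrative only, not the repository's code; shapes are for ViT-B/32):

```python
import numpy as np

def patch_embed_numpy(x, weight, bias, patch=32):
    """Equivalent of Conv2d(kernel=stride=patch) + flatten(2) + transpose(1, 2).
    x: (N, C, H, W); weight: (dim, C, patch, patch); bias: (dim,)."""
    N, C, H, W = x.shape
    dim = weight.shape[0]
    h, w = H // patch, W // patch
    # Cut into non-overlapping patches: (N, h*w, C*patch*patch)
    patches = (x.reshape(N, C, h, patch, w, patch)
                .transpose(0, 2, 4, 1, 3, 5)
                .reshape(N, h * w, C * patch * patch))
    # One linear projection per patch, plus bias
    return patches @ weight.reshape(dim, -1).T + bias

x = np.random.randn(1, 3, 224, 224)
w = np.random.randn(768, 3, 32, 32)
b = np.random.randn(768)
print(patch_embed_numpy(x, w, b).shape)  # (1, 49, 768)
```

Everything here is plain reshapes and a matmul, exactly the kind of graph TensorRT imports without plugins.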

Solution:

You do not need to write a custom plugin, because every operation in PatchEmbedding is supported by TensorRT. The problem lies in the ONNX export: torch.onnx.dynamo_export sometimes emits standard submodules as custom operations; note in the node dump above that the failing op sits in the non-standard domain "pkg.unicom", which the TensorRT parser can only resolve by looking for a plugin.
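You can confirm this diagnosis (and later verify a re-export) by listing every op whose domain is not a standard ONNX domain. A minimal sketch of that check in pure Python; with the onnx package you would feed it (node.op_type, node.domain) pairs taken from model.graph.node:

```python
STANDARD_DOMAINS = {"", "ai.onnx", "ai.onnx.ml"}  # built-in ONNX operator domains

def find_custom_ops(nodes, standard=frozenset(STANDARD_DOMAINS)):
    """Return (op_type, domain) pairs that TensorRT would try to load as plugins."""
    return [(op, dom) for op, dom in nodes if dom not in standard]

# The node from the trtexec dump above:
nodes = [
    ("Conv", ""),  # standard op, parses fine
    ("unicom_vision_transformer_PatchEmbedding_patch_embed_1", "pkg.unicom"),
]
print(find_custom_ops(nodes))
# [('unicom_vision_transformer_PatchEmbedding_patch_embed_1', 'pkg.unicom')]
```

After a successful trace-based export, this list should be empty.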

The workaround is to avoid torch.onnx.dynamo_export and instead use the tracing exporter, torch.onnx.export.

Here is the updated code:

import torch
import onnx
import onnxruntime

from unicom.vision_transformer import build_model

if __name__ == '__main__':
    model_name = "ViT-B/32"
    model_name_fp16 = "FP16-ViT-B-32"
    onnx_model_path = f"{model_name_fp16}.onnx"

    model = build_model(model_name)
    model.eval()
    model = model.to('cuda')
    torch_input = torch.randn(1, 3, 224, 224).to('cuda')

    # Trace-based export with torch.onnx.export
    with torch.no_grad():
        torch.onnx.export(
            model,
            torch_input,
            onnx_model_path,
            opset_version=13,  # an opset well supported by TensorRT 8.6's ONNX parser
            input_names=['input'],
            output_names=['output'],
            dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}},
        )

    onnx_model = onnx.load(onnx_model_path)
    onnx.checker.check_model(onnx_model)  # validate the exported graph

Explanation:

  1. torch.onnx.export: the tracing exporter builds the ONNX graph by actually running the model and recording the operations it executes.
  2. opset_version=13: a mature opset that TensorRT 8.6's ONNX parser handles well (the failed export above was parsed with "Opset version: 1").
  3. input_names / output_names: assign stable names to the model's inputs and outputs.
  4. dynamic_axes: declares which dimensions may vary at runtime, which is useful for supporting different batch sizes.

After exporting the ONNX model with the updated code, run the trtexec command again; this time the engine should build without the "plugin not found" error. (trtexec also warns that --workspace is deprecated; --memPoolSize is the replacement flag.)

From: 78787534
