Applies to: .NET 5, .NET 6, .NET 7, .NET 8, .NET Framework 4.6, 4.6.1, 4.7, 4.7.2, 4.8, 4.8.1, and .NET Core 3.1

OnnxRuntimeInferEngine Class

ONNX Runtime inference engine implementation

Definition

Namespace: DeploySharp.Engine
Assembly: DeploySharp (in DeploySharp.dll) Version: 0.0.4+6e8a2e904469617cd59619d666c0e272985c0e33
C#
public class OnnxRuntimeInferEngine : IModelInferEngine, 
	IDisposable
Inheritance
Object    OnnxRuntimeInferEngine
Implements
IModelInferEngine, IDisposable

Remarks

This class provides the ONNX Runtime-specific implementation of the IModelInferEngine interface, supporting multiple execution providers (CPU, CUDA, TensorRT, OpenVINO, etc.).

The engine handles model loading, session management, and tensor data conversions.
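The workflow described above (load, infer, convert) can be sketched as below. This is an illustrative sketch only: the configuration object, its property names, and the exact Predict signature are assumptions for illustration, not confirmed DeploySharp APIs; only the member names LoadModel, Predict, and Dispose come from this page.

```csharp
// Hypothetical usage sketch — DeploySharp types and signatures other than
// LoadModel, Predict, and Dispose are illustrative assumptions.
using DeploySharp.Engine;

var engine = new OnnxRuntimeInferEngine();

// Load and configure the ONNX model from a configuration object
// (the shape of modelConfig is assumed here).
engine.LoadModel(modelConfig);

// Run inference: input and output are assumed to be DataTensor instances,
// matching the PrepareInputTensors / ProcessOutputTensors members listed below.
var output = engine.Predict(inputTensor);

// Release the underlying inference session and session options.
engine.Dispose();
```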

Constructors

OnnxRuntimeInferEngine Initializes a new instance of the ONNX Runtime inference engine

Methods

CreateInvalidProviderException Creates a standardized exception for invalid execution provider combinations
Dispose Releases all resources used by the inference engine
Equals Determines whether the specified object is equal to the current object.
(Inherited from Object.)
ExecuteModelInference Executes the model inference session
Finalize Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection.
(Inherited from Object.)
GetHashCode Serves as the default hash function.
(Inherited from Object.)
GetType Gets the Type of the current instance.
(Inherited from Object.)
LoadModel Loads and configures the ONNX model based on the provided configuration
MemberwiseClone Creates a shallow copy of the current Object.
(Inherited from Object.)
Predict Executes model inference/prediction using the provided input tensor
PrepareInputTensors Converts an input DataTensor to ONNX Runtime format
ProcessBoolOutput Processes a bool output tensor (converted to byte) and adds it to the result collection
ProcessFloatOutput Processes a float32 output tensor and adds it to the result collection
ProcessIntOutput Processes an int32 output tensor and adds it to the result collection
ProcessOutputTensors Converts ONNX Runtime outputs to DataTensor format
ToString Returns a string that represents the current object.
(Inherited from Object.)
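Because the class implements IDisposable, deterministic cleanup of the inference session and session options can be guaranteed with a using statement. The configuration and tensor variables in this sketch are assumed placeholders, not documented DeploySharp APIs.

```csharp
// Hypothetical sketch: a using statement ensures Dispose() runs even if
// LoadModel or Predict throws, releasing inferenceSession and sessionOptions.
using (var engine = new OnnxRuntimeInferEngine())
{
    engine.LoadModel(modelConfig);            // modelConfig: assumed configuration object
    var result = engine.Predict(inputTensor); // inputTensor: assumed DataTensor
} // Dispose() is invoked automatically here
```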

Fields

inferenceSession ONNX Runtime inference session
inputNodeSize Count of input nodes
inputNodeTypes List of input node data types
modelConfig Model configuration reference
outputNodeSize Count of output nodes
outputNodeTypes List of output node data types
sessionOptions ONNX Runtime session options

See Also