Supported frameworks: .NET 5, .NET 6, .NET 7, .NET 8, .NET Framework 4.6, .NET Framework 4.6.1, .NET Framework 4.7, .NET Framework 4.7.2, .NET Framework 4.8, .NET Framework 4.8.1, and .NET Core 3.1

IModelInferEngine Interface

Defines the core interface for model inference engines.

Definition

Namespace: DeploySharp.Engine
Assembly: DeploySharp (in DeploySharp.dll) Version: 0.0.4+6e8a2e904469617cd59619d666c0e272985c0e33
C#
public interface IModelInferEngine : IDisposable
Implements
IDisposable

Remarks

This interface provides a standard contract for different inference implementations (ONNX Runtime, TensorRT, OpenVINO, etc.), ensuring consistent behavior across backends.

All implementing classes must be thread-safe for Predict operations and must properly manage native resources through the IDisposable pattern.
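
Because the engine owns native resources and Predict must be thread-safe, a typical caller wraps the engine in a using statement and may share a single instance across threads. The sketch below is illustrative only: the LoadModel and Predict parameter types (ModelConfig, Tensor) are assumptions, since this page lists only the method names; consult the individual method pages for the exact signatures.

C#
using System;
using System.Threading.Tasks;
using DeploySharp.Engine;

public static class EngineUsageSketch
{
    // 'createEngine' stands in for any concrete backend factory
    // (ONNX Runtime, TensorRT, OpenVINO, ...).
    public static void Run(Func<IModelInferEngine> createEngine, ModelConfig config, Tensor[] inputs)
    {
        // 'using' guarantees Dispose is called, releasing native resources.
        using (IModelInferEngine engine = createEngine())
        {
            engine.LoadModel(config); // assumed signature

            // Predict is required to be thread-safe, so concurrent calls
            // on the same instance are permitted by the interface contract.
            Parallel.ForEach(inputs, input =>
            {
                Tensor output = engine.Predict(input); // assumed signature
                Console.WriteLine($"Got output: {output}");
            });
        }
    }
}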

Methods

Dispose    Releases all resources used by the inference engine.
LoadModel    Loads and initializes the model with the specified configuration.
Predict    Performs model prediction/inference on the input tensor.
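
A minimal implementer-side sketch of these obligations follows. Again, the ModelConfig and Tensor types and the exact signatures are assumptions; serializing calls with a lock is one simple way to satisfy the thread-safety requirement when the underlying backend session is not itself safe for concurrent use.

C#
using System;
using DeploySharp.Engine;

public sealed class NullInferEngine : IModelInferEngine
{
    private readonly object _sync = new object();
    private bool _disposed;
    private bool _loaded;

    public void LoadModel(ModelConfig config) // assumed signature
    {
        lock (_sync)
        {
            // Create the backend session here (e.g., from the model path in config).
            _loaded = true;
        }
    }

    public Tensor Predict(Tensor input) // assumed signature
    {
        // The lock honors the "Predict must be thread-safe" contract.
        lock (_sync)
        {
            if (_disposed) throw new ObjectDisposedException(nameof(NullInferEngine));
            if (!_loaded) throw new InvalidOperationException("Call LoadModel first.");
            return input; // placeholder: run the real inference here
        }
    }

    public void Dispose()
    {
        lock (_sync)
        {
            if (_disposed) return;
            // Release native resources (sessions, device buffers) here.
            _disposed = true;
        }
    }
}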

See Also