Applies to: .NET 5, .NET 6, .NET 7, .NET 8, .NET Framework 4.6, .NET Framework 4.6.1, .NET Framework 4.7, .NET Framework 4.7.2, .NET Framework 4.8, .NET Framework 4.8.1, and .NET Core 3.1

OpenVinoInferEngine Class

OpenVINO inference engine implementation conforming to the IModelInferEngine interface

Definition

Namespace: DeploySharp.Engine
Assembly: DeploySharp (in DeploySharp.dll) Version: 0.0.4+6e8a2e904469617cd59619d666c0e272985c0e33
C#
public class OpenVinoInferEngine : IModelInferEngine, 
	IDisposable
Inheritance
Object → OpenVinoInferEngine
Implements
IModelInferEngine, IDisposable

Remarks

This class provides:
- Model loading and compilation
- Synchronous inference execution
- Dynamic shape support
- Multi-device execution (CPU, GPU, NPU)
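The sketch below shows the typical lifecycle. This page does not document the parameter types of LoadModel and Predict, so the ModelConfig initializer and the flat float[] input buffer used here are illustrative assumptions only.

C#
using System;
using DeploySharp.Engine;

// Minimal lifecycle sketch. The ModelConfig properties and the tensor
// type accepted by Predict are assumptions, not confirmed by this page.
using (var engine = new OpenVinoInferEngine())
{
    var config = new ModelConfig          // hypothetical configuration shape
    {
        ModelPath  = "model.xml",         // OpenVINO IR file
        DeviceName = "CPU"                // "GPU" and "NPU" are also supported
    };

    engine.LoadModel(config);             // loads and compiles the model

    var input = new float[1 * 3 * 224 * 224];  // placeholder NCHW input buffer
    var outputs = engine.Predict(input);       // synchronous inference

    // ... post-process outputs here ...
}   // Dispose releases the compiled model and inference requests

The using statement guarantees Dispose runs even if inference throws, which matters because the engine owns native OpenVINO resources (see the core and compiledModel fields below).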

Constructors

OpenVinoInferEngine Initializes a new instance of the OpenVINO inference engine

Properties

InputNodeCount Number of input nodes in the model
OutputNodeCount Number of output nodes in the model
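Once LoadModel has run, these counts allow a quick sanity check of the model's I/O layout; a small sketch, with engine and config as in the earlier example:

C#
engine.LoadModel(config);
Console.WriteLine($"Input nodes: {engine.InputNodeCount}, output nodes: {engine.OutputNodeCount}");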

Methods

AnalyzeInputNodes Analyzes and records model input node properties
AnalyzeModelStructure Analyzes the complete model structure, including inputs and outputs
AnalyzeOutputNodes Analyzes and records model output node properties
CompileModel Compiles the loaded model for the target device
Dispose Releases all resources used by the inference engine
Equals Determines whether the specified object is equal to the current object.
(Inherited from Object.)
ExecuteInference Executes synchronous inference
Finalize Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection.
(Inherited from Object.)
GetHashCode Serves as the default hash function.
(Inherited from Object.)
GetType Gets the Type of the current instance.
(Inherited from Object.)
InitializeInferenceRequests Initializes the pool of inference requests used for parallel processing
IsShapeDynamic Determines whether a PartialShape contains dynamic dimensions
LoadModel Loads and compiles the model with the specified configuration
LoadModelInternal Internal method that loads an OpenVINO model from the specified path
MemberwiseClone Creates a shallow copy of the current Object.
(Inherited from Object.)
Predict Executes inference using the provided input tensor
ProcessOutputs Processes and collects all output tensors
ProcessOutputTensor Processes a single output tensor based on its type
SetInputTensors Sets input tensors for inference
ToString Returns a string that represents the current object.
(Inherited from Object.)
ValidateConfig Validates the model configuration before loading
ValidateInputTensor Validates the input tensor against model requirements
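Because the class is exposed through IModelInferEngine, application code can hold the engine by the interface and stay decoupled from the OpenVINO backend. A sketch, assuming LoadModel and Predict as listed above and reusing the hypothetical ModelConfig from the earlier example:

C#
using System;
using DeploySharp.Engine;

// Backend-agnostic usage sketch; member signatures are assumptions.
IModelInferEngine engine = new OpenVinoInferEngine();
try
{
    engine.LoadModel(new ModelConfig { ModelPath = "model.xml" });
    var outputs = engine.Predict(new float[1 * 3 * 224 * 224]);
}
finally
{
    (engine as IDisposable)?.Dispose();  // OpenVinoInferEngine implements IDisposable
}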

Fields

compiledModel Compiled model ready for inference
core OpenVINO Core instance responsible for managing the inference engine
inferRequests Pool of inference requests for parallel processing
inputNodeTypes List of input node element types
model Loaded OpenVINO model representation
modelConfig Current model configuration
outputNodeTypes List of output node element types

See Also