public class OpenVinoInferEngine : IModelInferEngine, IDisposable
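The members below suggest a load–predict–dispose lifecycle. The following is a minimal usage sketch, not a confirmed example: the `ModelConfig` property names, the tensor type accepted by `Predict`, and its return type are not documented in this reference and are assumed for illustration.

```csharp
using System;

// Minimal lifecycle sketch for OpenVinoInferEngine. Assumptions (not from
// this reference): ModelConfig's property names ("ModelPath", "Device"),
// Predict accepting a float[] buffer, and Predict's return type.
class Example
{
    static void Main()
    {
        // Hypothetical configuration; consult ModelConfig for the real members.
        var config = new ModelConfig { ModelPath = "model.xml", Device = "CPU" };

        using (var engine = new OpenVinoInferEngine())
        {
            // Validates the config, loads the model, and compiles it for the device.
            engine.LoadModel(config);

            Console.WriteLine($"inputs: {engine.InputNodeCount}, outputs: {engine.OutputNodeCount}");

            // Assumed input shape and element type; real code must match the model's input node.
            float[] input = new float[1 * 3 * 224 * 224];
            var outputs = engine.Predict(input);
        } // Dispose releases the model, compiled model, requests, and Core.
    }
}
```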
| Member | Description |
| --- | --- |
| OpenVinoInferEngine | Initializes a new instance of the OpenVINO inference engine |
| InputNodeCount | Number of input nodes in the model |
| OutputNodeCount | Number of output nodes in the model |
| AnalyzeInputNodes | Analyzes and records the model's input node properties |
| AnalyzeModelStructure | Analyzes the complete model structure, including inputs and outputs |
| AnalyzeOutputNodes | Analyzes and records the model's output node properties |
| CompileModel | Compiles the loaded model for the target device |
| Dispose | Releases all resources used by the inference engine |
| Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.) |
| ExecuteInference | Executes synchronous inference |
| Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.) |
| GetHashCode | Serves as the default hash function. (Inherited from Object.) |
| GetType | Gets the Type of the current instance. (Inherited from Object.) |
| InitializeInferenceRequests | Initializes the pool of inference requests used for parallel processing |
| IsShapeDynamic | Determines whether a PartialShape contains dynamic dimensions |
| LoadModel | Loads and compiles the model with the specified configuration |
| LoadModelInternal | Internal method that loads an OpenVINO model from the specified path |
| MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.) |
| Predict | Executes inference using the provided input tensor (see the sketch after this table) |
| ProcessOutputs | Processes and collects all output tensors |
| ProcessOutputTensor | Processes a single output tensor based on its type |
| SetInputTensors | Sets the input tensors for inference |
| ToString | Returns a string that represents the current object. (Inherited from Object.) |
| ValidateConfig | Validates the model configuration before loading |
| ValidateInputTensor | Validates the input tensor against model requirements |
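Read together, the method names above suggest how `Predict` is probably staged internally. The following is a speculative sketch inferred only from those names; the signatures, the request-pooling strategy, the return type, and the `AcquireRequest` helper are all assumptions, not part of this documented API.

```csharp
// Speculative staging of Predict, inferred from the member names above.
// AcquireRequest is a hypothetical helper standing in for however the
// engine borrows a request from the inferRequests pool.
public object[] Predict(float[] input)  // input/return types assumed
{
    ValidateInputTensor(input);         // check the tensor against model requirements
    var request = AcquireRequest();     // hypothetical: borrow from inferRequests
    SetInputTensors(request, input);    // bind the input tensors for inference
    ExecuteInference(request);          // run synchronous inference
    return ProcessOutputs(request);     // collect outputs via ProcessOutputTensor
}
```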
| Field | Description |
| --- | --- |
| compiledModel | Compiled model ready for inference |
| core | OpenVINO Core instance responsible for managing the inference engine |
| inferRequests | Pool of inference requests used for parallel processing |
| inputNodeTypes | List of input node element types |
| model | Loaded OpenVINO model representation |
| modelConfig | Current model configuration |
| outputNodeTypes | List of output node element types |
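The `Dispose` and `Finalize` entries together with the fields above point to the standard .NET dispose pattern. Below is a sketch under that assumption; the actual implementation is not shown in this reference, and it further assumes the wrapped OpenVINO objects (core, model, compiledModel, and each pooled request) are themselves IDisposable.

```csharp
// Standard dispose-pattern sketch; the disposability of each wrapped
// object is assumed. Field names match the field table above.
public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this); // the finalizer remains only as a safety net
}

protected virtual void Dispose(bool disposing)
{
    if (!disposing) return;
    // Release native OpenVINO handles in roughly reverse order of creation.
    foreach (var request in inferRequests) request.Dispose();
    compiledModel?.Dispose();
    model?.Dispose();
    core?.Dispose();
}
```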