InferenceBackend Enumeration

Represents the software inference backends supported by the system.

Namespace: DeploySharp.Engine
Assembly: DeploySharp (in DeploySharp.dll) Version: 0.0.4+6e8a2e904469617cd59619d666c0e272985c0e33
```csharp
public enum InferenceBackend
```
Each backend provides optimized execution for specific hardware configurations.
| Member | Value | Description |
|---|---|---|
| OpenVINO | 0 | Intel's OpenVINO toolkit. Optimized for Intel CPUs, integrated GPUs, and VPUs. |
| OnnxRuntime | 1 | Microsoft's ONNX Runtime. Cross-platform inference accelerator with multiple execution providers. |
| TensorRT | 2 | NVIDIA's TensorRT. High-performance deep learning inference optimizer and runtime. |
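A minimal sketch of selecting and inspecting a backend value. The enum members and namespace come from this page; how the chosen value is passed into a model or pipeline object is library-specific and not shown here, so consult the rest of the DeploySharp API for the type that consumes `InferenceBackend`.

```csharp
using System;
using DeploySharp.Engine;

class BackendSelectionExample
{
    static void Main()
    {
        // Choose a backend; the numeric values mirror the members table above.
        InferenceBackend backend = InferenceBackend.OnnxRuntime;
        Console.WriteLine($"Selected backend: {backend} ({(int)backend})");

        // Enumerate all supported backends, e.g. to populate a selection UI.
        foreach (InferenceBackend b in Enum.GetValues(typeof(InferenceBackend)))
        {
            Console.WriteLine($"{(int)b}: {b}");
        }
    }
}
```

Because the members are a plain zero-based sequence, `(InferenceBackend)0` round-trips to `OpenVINO`; prefer the named members in application code for readability.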