OnnxRuntimeDeviceType Enum
Defines hardware acceleration device types supported by ONNX Runtime.
Namespace: DeploySharp.Engine
Assembly: DeploySharp (in DeploySharp.dll)
Version: 0.0.4+6e8a2e904469617cd59619d666c0e272985c0e33
public enum OnnxRuntimeDeviceType
These correspond to the different execution providers available in ONNX Runtime.
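For reference, the member names and numeric values listed in the table below imply the following declaration (a sketch reconstructed from that table; XML documentation comments from the source are omitted):

```csharp
namespace DeploySharp.Engine
{
    public enum OnnxRuntimeDeviceType
    {
        Default  = 0,   // default device (CPU, or CUDA GPU when available)
        OpenVINO = 1,   // Intel OpenVINO
        Dnnl     = 2,   // Intel oneDNN (formerly DNNL)
        Cuda     = 3,   // NVIDIA CUDA
        TensorRT = 4,   // NVIDIA TensorRT
        DML      = 5,   // Microsoft DirectML
        ROCm     = 6,   // AMD ROCm
        MIGraphX = 7    // AMD MIGraphX
    }
}
```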
| Member | Value | Description |
|--------|-------|-------------|
| Default | 0 | Uses the default acceleration device: CPU (ONNX Runtime's custom acceleration engine) or GPU (CUDA inference acceleration engine, the default). |
| OpenVINO | 1 | Intel OpenVINO inference engine. Supports CPU/GPU/NPU devices with Intel hardware optimization. |
| Dnnl | 2 | Intel oneDNN (formerly DNNL) acceleration. Deep Neural Network Library for CPU optimization. |
| Cuda | 3 | NVIDIA CUDA acceleration engine. Requires an NVIDIA GPU. Provides optimal CUDA core utilization. |
| TensorRT | 4 | NVIDIA TensorRT acceleration engine. Provides model optimization and quantization. Requires model pre-compilation. |
| DML | 5 | Microsoft DirectML execution provider. Hardware-accelerated via DirectX 12. Supports both AMD and NVIDIA GPUs. |
| ROCm | 6 | AMD ROCm acceleration platform. Supports AMD GPU devices. Open-source heterogeneous computing framework. |
| MIGraphX | 7 | AMD MIGraphX acceleration engine. Optimized for AMD GPUs. Supports model fusion optimization. |
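To illustrate how these values typically map onto ONNX Runtime execution providers, the following is a minimal sketch assuming the Microsoft.ML.OnnxRuntime package. It is not DeploySharp's internal code: the `DeviceTypeExample` class, the helper method, and the model path are hypothetical, and each `AppendExecutionProvider_*` call only works with an ONNX Runtime build that ships the corresponding provider.

```csharp
using System;
using DeploySharp.Engine;          // OnnxRuntimeDeviceType
using Microsoft.ML.OnnxRuntime;    // SessionOptions, InferenceSession

public static class DeviceTypeExample
{
    // Hypothetical helper (illustration only): builds SessionOptions for a
    // given OnnxRuntimeDeviceType by appending the matching execution provider.
    public static SessionOptions CreateSessionOptions(OnnxRuntimeDeviceType device)
    {
        var options = new SessionOptions();
        switch (device)
        {
            case OnnxRuntimeDeviceType.Cuda:
                options.AppendExecutionProvider_CUDA(0);       // NVIDIA CUDA, GPU 0
                break;
            case OnnxRuntimeDeviceType.TensorRT:
                options.AppendExecutionProvider_Tensorrt(0);   // NVIDIA TensorRT, GPU 0
                break;
            case OnnxRuntimeDeviceType.OpenVINO:
                options.AppendExecutionProvider_OpenVINO();    // Intel OpenVINO
                break;
            case OnnxRuntimeDeviceType.DML:
                options.AppendExecutionProvider_DML(0);        // DirectML via DirectX 12
                break;
            case OnnxRuntimeDeviceType.Default:
                break;                                         // CPU; CUDA when available
            default:
                // Dnnl / ROCm / MIGraphX require provider-specific ONNX Runtime
                // builds; see the ONNX Runtime C# API for the matching append call.
                throw new NotSupportedException($"Unhandled device type: {device}");
        }
        return options;
    }
}

// Usage (hypothetical model path):
// using var session = new InferenceSession("model.onnx",
//     DeviceTypeExample.CreateSessionOptions(OnnxRuntimeDeviceType.Cuda));
```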