[Missing <summary> documentation for "T:DeploySharp.Model.Yolov13DetConfig"]
public class Yolov13DetConfig : Yolov8DetConfig
Yolov13DetConfig | Initializes a new instance of the Yolov13DetConfig class with default values |
Yolov13DetConfig(String) | Initializes a new instance of the Yolov13DetConfig class |
Yolov13DetConfig(String, InferenceBackend, DeviceType, Single, Single, Int32, ImageResizeMode, ImageNormalizationType) | Initializes a new instance of the Yolov13DetConfig class |
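The following is a minimal sketch of constructing the configuration with the overload above. The meaning assigned to the two Single parameters (confidence and NMS thresholds), the Int32 parameter (inference batch size), and the enum member names (InferenceBackend.OnnxRuntime, DeviceType.CPU, ImageResizeMode.LetterBox, ImageNormalizationType.Scale_0_1) are assumptions for illustration and are not confirmed by this page; the model path is likewise only an example.

using DeploySharp.Model;

// Sketch only: default construction versus the fully parameterized overload.
// Enum member names and parameter semantics below are assumed, not taken from this page.
var defaultConfig = new Yolov13DetConfig();

var config = new Yolov13DetConfig(
    "yolov13n.onnx",                  // model path (example value)
    InferenceBackend.OnnxRuntime,     // target inference backend (assumed member name)
    DeviceType.CPU,                   // target device (assumed member name)
    0.25f,                            // Single: assumed confidence threshold
    0.45f,                            // Single: assumed NMS threshold
    1,                                // Int32: assumed inference batch size
    ImageResizeMode.LetterBox,        // image resize mode (assumed member name)
    ImageNormalizationType.Scale_0_1  // normalization type (assumed member name)
);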
CategoryDict | Dictionary mapping class IDs to human-readable names (Inherited from IConfig.) |
ConfidenceThreshold | Confidence threshold for prediction filtering (0-1 range) (Inherited from YoloConfig.) |
DataProcessor | Configuration for the image data processing pipeline (Inherited from IImgConfig.) |
DynamicInput | Whether the model expects dynamic input shapes (Inherited from IConfig.) |
DynamicOutput | Whether the model produces dynamic output shapes (Inherited from IConfig.) |
InferBatch | Default inference batch size (for dynamic-input models) (Inherited from IConfig.) |
InputNames | (Inherited from IConfig.) |
InputSizes | (Inherited from IConfig.) |
MaxBatchSize | Maximum batch size capacity (Inherited from IConfig.) |
ModelPath | Model file path (supports ONNX/TensorFlow/etc. formats) (Inherited from IConfig.) |
ModelType | Type of the AI model (classification/detection/etc.) (Inherited from IConfig.) |
NmsThreshold | Non-Maximum Suppression (NMS) threshold (0-1 range) (Inherited from YoloConfig.) |
NonMaxSuppression | Non-Maximum Suppression algorithm configuration (Inherited from YoloConfig.) |
NumThreads | Number of CPU threads used for inference (Inherited from IConfig.) |
OutputNames | Output tensor names (Inherited from IConfig.) |
OutputSizes | (Inherited from IConfig.) |
PrecisionMode | Computation precision mode (FP32/FP16/INT8) (Inherited from IConfig.) |
TargetDeviceType | Target execution device (CPU/GPU/NPU) (Inherited from IConfig.) |
TargetInferenceBackend | Target inference backend (OpenVINO/TensorRT/ONNXRuntime/etc.) (Inherited from IConfig.) |
TargetOnnxRuntimeDeviceType | ONNXRuntime-specific device type (Inherited from IConfig.) |
UseGPU | Whether GPU acceleration is enabled (Inherited from IConfig.) |
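The inherited properties above can also be set directly after construction. The sketch below assumes the listed properties expose public setters and that CategoryDict is keyed by integer class ID; the threshold values and category labels are illustrative only.

using System.Collections.Generic;
using DeploySharp.Model;

// Sketch only: tune inherited detection properties after construction.
// Assumes public setters and an int-keyed CategoryDict; labels are examples.
var config = new Yolov13DetConfig("yolov13n.onnx")
{
    ConfidenceThreshold = 0.3f,   // keep predictions scoring at least 0.3
    NmsThreshold = 0.5f,          // NMS threshold in the 0-1 range
    NumThreads = 4,               // CPU threads used for inference
    UseGPU = false,               // leave GPU acceleration disabled
    CategoryDict = new Dictionary<int, string>
    {
        { 0, "person" },          // example class ID to label mapping
        { 1, "bicycle" },
    },
};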
AppendIfSet<T> | Helper method for conditional string building (Inherited from IConfig.) |
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.) |
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.) |
GetHashCode | Serves as the default hash function. (Inherited from Object.) |
GetType | Gets the Type of the current instance. (Inherited from Object.) |
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.) |
SetModelPath | Sets the model path using the fluent interface pattern (Inherited from IConfig.) |
SetTargetDeviceType | Sets the target device type using the fluent interface (Inherited from IConfig.) |
SetTargetInferenceBackend | Sets the inference backend using the fluent interface (Inherited from IConfig.) |
SetTargetOnnxRuntimeDeviceType | Sets the ONNXRuntime-specific device type (Inherited from IConfig.) |
ToString | Generates a detailed configuration summary string (Inherited from YoloConfig.) |