public class Yolov8SegConfig : YoloConfig
Presets optimal default values for YOLOv8 segmentation models while allowing customization. Inherits all YOLO-specific configuration parameters from YoloConfig.
var config = new Yolov8DetConfig("yolov8s-seg.onnx")
{
TargetInferenceBackend = InferenceBackend.OpenVINO,
TargetDeviceType = DeviceType.CPU
};
// Named-argument constructor style
var config = new Yolov8SegConfig(
    modelPath: "yolov8s-seg.onnx",
    inferenceBackend: InferenceBackend.OpenVINO,
    deviceType: DeviceType.CPU,
    confidenceThreshold: 0.7f
);
Yolov8SegConfig | Initializes a new instance with default values |
Yolov8SegConfig(String) | Initializes a new instance with a model path and reasonable defaults |
Yolov8SegConfig(String, InferenceBackend, DeviceType, Single, Single, Int32, ImageResizeMode, ImageNormalizationType) | Fully customizable constructor for advanced configuration |
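The first two overloads are illustrated in the minimal sketch below; the parameter names of the fully customizable overload beyond those shown in the examples above are not documented on this page, so consult the actual constructor signature before relying on them.

// Default values only; the model path can be set later (e.g. via SetModelPath)
var defaults = new Yolov8SegConfig();

// Model path plus reasonable segmentation defaults
var fromPath = new Yolov8SegConfig("yolov8s-seg.onnx");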
CategoryDict | Dictionary mapping class IDs to human-readable names (Inherited from IConfig.) |
ConfidenceThreshold | Confidence threshold for prediction filtering (0-1 range) (Inherited from YoloConfig.) |
DataProcessor | Configuration for the image data processing pipeline (Inherited from IImgConfig.) |
DynamicInput | Whether the model expects dynamic input shapes (Inherited from IConfig.) |
DynamicOutput | Whether the model produces dynamic output shapes (Inherited from IConfig.) |
InferBatch | Default inference batch size (for dynamic input models) (Inherited from IConfig.) |
InputNames | (Inherited from IConfig.) |
InputSizes | (Inherited from IConfig.) |
MaxBatchSize | Maximum batch size capacity (Inherited from IConfig.) |
ModelPath | Model file path (supports ONNX/TensorFlow/etc. formats) (Inherited from IConfig.) |
ModelType | Type of the AI model (classification/detection/etc.) (Inherited from IConfig.) |
NmsThreshold | Non-Maximum Suppression (NMS) threshold (0-1 range) (Inherited from YoloConfig.) |
NonMaxSuppression | Non-Maximum Suppression algorithm configuration (Inherited from YoloConfig.) |
NumThreads | Number of CPU threads used for inference (Inherited from IConfig.) |
OutputNames | Output tensor names (Inherited from IConfig.) |
OutputSizes | (Inherited from IConfig.) |
PrecisionMode | Computation precision mode (FP32/FP16/INT8) (Inherited from IConfig.) |
TargetDeviceType | Target execution device (CPU/GPU/NPU) (Inherited from IConfig.) |
TargetInferenceBackend | Target inference backend (OpenVINO/TensorRT/ONNXRuntime/etc.) (Inherited from IConfig.) |
TargetOnnxRuntimeDeviceType | ONNXRuntime-specific device type (Inherited from IConfig.) |
UseGPU | Whether GPU acceleration is enabled (Inherited from IConfig.) |
AppendIfSet&lt;T&gt; | Helper method for conditional string building (Inherited from IConfig.) |
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.) |
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.) |
GetHashCode | Serves as the default hash function. (Inherited from Object.) |
GetType | Gets the Type of the current instance. (Inherited from Object.) |
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.) |
SetModelPath | Sets the model path using the fluent interface pattern (Inherited from IConfig.) |
SetTargetDeviceType | Sets the target device type using the fluent interface (Inherited from IConfig.) |
SetTargetInferenceBackend | Sets the inference backend using the fluent interface (Inherited from IConfig.) |
SetTargetOnnxRuntimeDeviceType | Sets the ONNXRuntime-specific device type (Inherited from IConfig.) |
ToString | Generates a detailed configuration summary string (Inherited from YoloConfig.) |
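The Set* methods above follow a fluent interface pattern. A minimal sketch, assuming each setter returns the configuration instance so that calls can be chained:

var config = new Yolov8SegConfig()
    .SetModelPath("yolov8s-seg.onnx")
    .SetTargetInferenceBackend(InferenceBackend.OpenVINO)
    .SetTargetDeviceType(DeviceType.CPU);

// ToString() produces the detailed configuration summary, useful for logging
Console.WriteLine(config.ToString());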