ov_available_devices_free
|
Releases memory occupied by ov_available_devices_t.
|
ov_compiled_model_create_infer_request
|
Creates an inference request object used to infer the compiled model.
|
ov_compiled_model_export_model
|
Exports the current compiled model to an output stream `std::ostream`.
The exported model can also be imported via the ov::Core::import_model method.
|
ov_compiled_model_free
|
Release the memory allocated by ov_compiled_model_t.
|
ov_compiled_model_get_context
|
Returns pointer to device-specific shared context on a remote accelerator
device that was used to create this CompiledModel.
|
ov_compiled_model_get_property
|
Gets properties for current compiled model.
|
ov_compiled_model_get_runtime_model
|
Gets runtime model information from a device.
|
ov_compiled_model_input
|
Get the single const input port of ov_compiled_model_t, which only supports a single-input model.
|
ov_compiled_model_input_by_index
|
Get a const input port of ov_compiled_model_t by port index.
|
ov_compiled_model_input_by_name
|
Get a const input port of ov_compiled_model_t by name.
|
ov_compiled_model_inputs_size
|
Get the input size of ov_compiled_model_t.
|
ov_compiled_model_output
|
Get the single const output port of ov_compiled_model_t, which only supports a single-output model.
|
ov_compiled_model_output_by_index
|
Get a const output port of ov_compiled_model_t by port index.
|
ov_compiled_model_output_by_name
|
Get a const output port of ov_compiled_model_t by name.
|
ov_compiled_model_outputs_size
|
Get the output size of ov_compiled_model_t.
|
ov_compiled_model_set_property
|
Sets properties for a device; acceptable keys can be found in ov_property_key_xxx.
|
ov_const_port_get_shape
|
Get the shape of port object.
|
ov_core_compile_model(IntPtr, IntPtr, SByte, UInt64, IntPtr)
|
Creates a compiled model from a source model object. Users can create
as many compiled models as they need and use them simultaneously
(up to the limitation of the hardware resources).
|
ov_core_compile_model(IntPtr, IntPtr, SByte, UInt64, IntPtr, IntPtr, IntPtr)
|
|
ov_core_compile_model(IntPtr, IntPtr, SByte, UInt64, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr)
|
|
ov_core_compile_model(IntPtr, IntPtr, SByte, UInt64, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr)
|
|
ov_core_compile_model_from_file(IntPtr, SByte, SByte, UInt64, IntPtr)
|
Reads a model and creates a compiled model from the IR/ONNX/PDPD file.
This can be more efficient than using the ov_core_read_model_from_XXX + ov_core_compile_model flow,
especially for cases when caching is enabled and a cached model is available.
|
ov_core_compile_model_from_file(IntPtr, SByte, SByte, UInt64, IntPtr, IntPtr, IntPtr)
|
|
ov_core_compile_model_from_file(IntPtr, SByte, SByte, UInt64, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr)
|
|
ov_core_compile_model_from_file(IntPtr, SByte, SByte, UInt64, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr)
|
|
ov_core_compile_model_with_context
|
Creates a compiled model from a source model within a specified remote context.
|
ov_core_create
|
Constructs OpenVINO Core instance by default.
See RegisterPlugins for more details.
|
ov_core_create_context
|
Creates a new remote shared context object on the specified accelerator device
using specified plugin-specific low-level device API parameters (device handle, pointer, context, etc.).
|
ov_core_create_with_config
|
Constructs OpenVINO Core instance using XML configuration file with devices description.
See RegisterPlugins for more details.
|
ov_core_free
|
Release the memory allocated by ov_core_t.
|
ov_core_get_available_devices
|
Returns devices available for inference.
|
ov_core_get_default_context
|
Gets a pointer to default (plugin-supplied) shared context object for the specified accelerator device.
|
ov_core_get_property
|
Gets properties related to device behaviour.
The method extracts information that can be set via the set_property method.
|
ov_core_get_versions_by_device_name
|
Returns device plugins version information.
Device name can be complex and identify multiple devices at once like `HETERO:CPU,GPU`;
in this case, std::map contains multiple entries, each per device.
|
ov_core_import_model
|
Imports a compiled model from the previously exported one.
|
ov_core_read_model
|
Reads models from IR / ONNX / PDPD / TF / TFLite formats.
|
ov_core_read_model_from_memory_buffer
|
Reads models from IR / ONNX / PDPD / TF / TFLite formats.
|
ov_core_read_model_unicode
|
Reads models from IR / ONNX / PDPD / TF / TFLite formats.
|
ov_core_set_property(IntPtr, SByte, IntPtr, IntPtr)
|
|
ov_core_set_property(IntPtr, SByte, IntPtr, IntPtr, IntPtr, IntPtr)
|
Sets properties for a device; acceptable keys can be found in ov_property_key_xxx.
|
ov_core_set_property(IntPtr, SByte, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr, IntPtr)
|
|
ov_core_versions_free
|
Releases memory occupied by ov_core_version_list_t.
|
ov_dimension_is_dynamic
|
Check whether this dimension is dynamic.
|
ov_free
|
Free a char string allocated by OpenVINO.
|
ov_get_error_info
|
Print the error info.
|
ov_get_last_err_msg
|
Get the last error msg.
|
ov_get_openvino_version
|
Get version of OpenVINO.
|
ov_infer_request_cancel
|
Cancel inference request.
|
ov_infer_request_free
|
Release the memory allocated by ov_infer_request_t.
|
ov_infer_request_get_input_tensor
|
Get an input tensor from the model with only one input tensor.
|
ov_infer_request_get_input_tensor_by_index
|
Get an input tensor by the index of input tensor.
|
ov_infer_request_get_output_tensor
|
Get an output tensor from the model with only one output tensor.
|
ov_infer_request_get_output_tensor_by_index
|
Get an output tensor by the index of output tensor.
|
ov_infer_request_get_profiling_info
|
Query performance measures per layer to identify the most time-consuming operation.
|
ov_infer_request_get_tensor
|
Get an input/output tensor by the name of tensor.
|
ov_infer_request_get_tensor_by_const_port
|
Get an input/output tensor by const port.
|
ov_infer_request_get_tensor_by_port
|
Get an input/output tensor by port.
|
ov_infer_request_infer
|
Infer specified input(s) in synchronous mode.
|
ov_infer_request_set_callback
|
Set callback function, which will be called when inference is done.
|
ov_infer_request_set_input_tensor
|
Set an input tensor for the model with single input to infer on.
|
ov_infer_request_set_input_tensor_by_index
|
Set an input tensor to infer on by the index of tensor.
|
ov_infer_request_set_output_tensor
|
Set an output tensor to infer models with single output.
|
ov_infer_request_set_output_tensor_by_index
|
Set an output tensor to infer by the index of output tensor.
|
ov_infer_request_set_tensor
|
Set an input/output tensor to infer on by the name of tensor.
|
ov_infer_request_set_tensor_by_const_port
|
Set an input/output tensor to infer request for the port.
|
ov_infer_request_set_tensor_by_port
|
Set an input/output tensor to infer request for the port.
|
ov_infer_request_start_async
|
Start inference of specified input(s) in asynchronous mode.
|
ov_infer_request_wait
|
Wait for the result to become available.
|
ov_infer_request_wait_for
|
Waits for the result to become available. Blocks until the specified timeout has elapsed or the result becomes available,
whichever comes first.
|
ov_layout_create
|
Create a layout object.
|
ov_layout_free
|
Free layout object.
|
ov_layout_to_string
|
Convert layout object to a readable string.
|
ov_model_const_input
|
Get the single const input port of ov_model_t, which only supports a single-input model.
|
ov_model_const_input_by_index
|
Get a const input port of ov_model_t by port index.
|
ov_model_const_input_by_name
|
Get a const input port of ov_model_t by name.
|
ov_model_const_output
|
Get the single const output port of ov_model_t, which only supports a single-output model.
|
ov_model_const_output_by_index
|
Get a const output port of ov_model_t by port index.
|
ov_model_const_output_by_name
|
Get a const output port of ov_model_t by name.
|
ov_model_free
|
Release the memory allocated by ov_model_t.
|
ov_model_get_friendly_name
|
Gets the friendly name for a model.
|
ov_model_input
|
Get the single input port of ov_model_t, which only supports a single-input model.
|
ov_model_input_by_index
|
Get an input port of ov_model_t by port index.
|
ov_model_input_by_name
|
Get an input port of ov_model_t by name.
|
ov_model_inputs_size
|
Get the input size of ov_model_t.
|
ov_model_is_dynamic
|
Returns true if any of the ops defined in the model has a dynamic shape.
|
ov_model_output
|
Get the single output port of ov_model_t, which only supports a single-output model.
|
ov_model_output_by_index
|
Get an output port of ov_model_t by port index.
|
ov_model_output_by_name
|
Get an output port of ov_model_t by name.
|
ov_model_outputs_size
|
Get the output size of ov_model_t.
|
ov_model_reshape
|
Do reshape in model with a list of (name, partial shape).
|
ov_model_reshape_by_port_indexes
|
Do reshape in model with a list of (port id, partial shape).
|
ov_model_reshape_by_ports
|
Do reshape in model with a list of (ov_output_port_t, partial shape).
|
ov_model_reshape_input_by_name
|
Do reshape in model with partial shape for a specified name.
|
ov_model_reshape_single_input
|
Do reshape in model for one node (port 0).
|
ov_output_const_port_free
|
Free const port object.
|
ov_output_port_free
|
Free port object.
|
ov_partial_shape_create
|
Initialize a partial shape with static rank and dynamic dimension.
|
ov_partial_shape_create_dynamic
|
Initialize a partial shape with dynamic rank and dynamic dimension.
|
ov_partial_shape_create_static
|
Initialize a partial shape with static rank and static dimension.
|
ov_partial_shape_free
|
Release internal memory allocated in partial shape.
|
ov_partial_shape_is_dynamic
|
Check whether this partial_shape is dynamic.
|
ov_partial_shape_to_shape
|
Convert partial shape without dynamic data to a static shape.
|
ov_partial_shape_to_string
|
Helper function to convert a partial shape to a readable string.
|
ov_port_get_any_name
|
Get the tensor name of port.
|
ov_port_get_element_type
|
Get the tensor type of port.
|
ov_port_get_partial_shape
|
Get the partial shape of port.
|
ov_port_get_shape
|
Get the shape of port object.
|
ov_preprocess_input_info_free
|
Release the memory allocated by ov_preprocess_input_info_t.
|
ov_preprocess_input_info_get_model_info
|
Get current input model information.
|
ov_preprocess_input_info_get_preprocess_steps
|
Get a ov_preprocess_preprocess_steps_t.
|
ov_preprocess_input_info_get_tensor_info
|
Get a ov_preprocess_input_tensor_info_t.
|
ov_preprocess_input_model_info_free
|
Release the memory allocated by ov_preprocess_input_model_info_t.
|
ov_preprocess_input_model_info_set_layout
|
Set layout for model's input tensor.
|
ov_preprocess_input_tensor_info_free
|
Release the memory allocated by ov_preprocess_input_tensor_info_t.
|
ov_preprocess_input_tensor_info_set_color_format
|
Set ov_preprocess_input_tensor_info_t color format.
|
ov_preprocess_input_tensor_info_set_color_format_with_subname(IntPtr, UInt32, UInt64, IntPtr)
|
Set ov_preprocess_input_tensor_info_t color format with subname.
|
ov_preprocess_input_tensor_info_set_color_format_with_subname(IntPtr, UInt32, UInt64, IntPtr, IntPtr)
|
|
ov_preprocess_input_tensor_info_set_color_format_with_subname(IntPtr, UInt32, UInt64, IntPtr, IntPtr, IntPtr)
|
|
ov_preprocess_input_tensor_info_set_color_format_with_subname(IntPtr, UInt32, UInt64, IntPtr, IntPtr, IntPtr, IntPtr)
|
|
ov_preprocess_input_tensor_info_set_element_type
|
Set ov_preprocess_input_tensor_info_t precision.
|
ov_preprocess_input_tensor_info_set_from
|
Helper function to reuse element type and shape from user's created tensor.
|
ov_preprocess_input_tensor_info_set_layout
|
Set ov_preprocess_input_tensor_info_t layout.
|
ov_preprocess_input_tensor_info_set_memory_type
|
Set ov_preprocess_input_tensor_info_t memory type.
|
ov_preprocess_input_tensor_info_set_spatial_static_shape
|
Set ov_preprocess_input_tensor_info_t spatial_static_shape.
|
ov_preprocess_output_info_free
|
Release the memory allocated by ov_preprocess_output_info_t.
|
ov_preprocess_output_info_get_tensor_info
|
Get a ov_preprocess_input_tensor_info_t.
|
ov_preprocess_output_set_element_type
|
Set ov_preprocess_input_tensor_info_t precision.
|
ov_preprocess_output_tensor_info_free
|
Release the memory allocated by ov_preprocess_output_tensor_info_t.
|
ov_preprocess_prepostprocessor_build
|
Adds pre/post-processing operations to the function passed in the constructor.
|
ov_preprocess_prepostprocessor_create
|
Create a ov_preprocess_prepostprocessor_t instance.
|
ov_preprocess_prepostprocessor_free
|
Release the memory allocated by ov_preprocess_prepostprocessor_t.
|
ov_preprocess_prepostprocessor_get_input_info
|
Get the input info of ov_preprocess_prepostprocessor_t instance.
|
ov_preprocess_prepostprocessor_get_input_info_by_index
|
Get the input info of ov_preprocess_prepostprocessor_t instance by tensor order.
|
ov_preprocess_prepostprocessor_get_input_info_by_name
|
Get the input info of ov_preprocess_prepostprocessor_t instance by tensor name.
|
ov_preprocess_prepostprocessor_get_output_info
|
Get the output info of ov_preprocess_output_info_t instance.
|
ov_preprocess_prepostprocessor_get_output_info_by_index
|
Get the output info of ov_preprocess_output_info_t instance by index.
|
ov_preprocess_prepostprocessor_get_output_info_by_name
|
Get the output info of ov_preprocess_output_info_t instance by name.
|
ov_preprocess_preprocess_steps_convert_color
|
Convert ov_preprocess_preprocess_steps_t color.
|
ov_preprocess_preprocess_steps_convert_element_type
|
Convert ov_preprocess_preprocess_steps_t element type.
|
ov_preprocess_preprocess_steps_convert_layout
|
Add 'convert layout' operation to specified layout.
|
ov_preprocess_preprocess_steps_crop
|
Crop input tensor between begin and end coordinates.
|
ov_preprocess_preprocess_steps_free
|
Release the memory allocated by ov_preprocess_preprocess_steps_t.
|
ov_preprocess_preprocess_steps_mean
|
Add mean preprocess operation. Subtract specified value from each element of input.
|
ov_preprocess_preprocess_steps_mean_multi_channels
|
Add mean preprocess operation. Subtract each channel element of input by different specified value.
|
ov_preprocess_preprocess_steps_resize
|
Add resize operation to model's dimensions.
|
ov_preprocess_preprocess_steps_reverse_channels
|
Reverse channels operation.
|
ov_preprocess_preprocess_steps_scale
|
Add scale preprocess operation. Divide each element of input by specified value.
|
ov_preprocess_preprocess_steps_scale_multi_channels
|
Add scale preprocess operation. Divide each channel element of input by different specified value.
|
ov_profiling_info_list_free
|
Release the memory allocated by ov_profiling_info_list_t.
|
ov_rank_is_dynamic
|
Check whether this rank is dynamic.
|
ov_shape_create
|
Initialize a full shape object, allocate space for its dimensions,
and set its content if dims is not null.
|
ov_shape_free
|
Free a shape object's internal memory.
|
ov_shape_to_partial_shape
|
Convert shape to partial shape.
|
ov_tensor_create
|
Constructs Tensor using element type and shape. Allocate internal host storage using default allocator.
|
ov_tensor_create_from_host_ptr
|
Constructs Tensor using element type and shape. Wraps the user-provided host memory instead of allocating internal storage.
|
ov_tensor_data
|
Provides access to the underlying host memory.
|
ov_tensor_free
|
Free ov_tensor_t.
|
ov_tensor_get_byte_size
|
Get the size of the current Tensor in bytes.
|
ov_tensor_get_element_type
|
Get type for tensor.
|
ov_tensor_get_shape
|
Get shape for tensor.
|
ov_tensor_get_size
|
Get the total number of elements (a product of all the dims, or 1 for a scalar).
|
ov_tensor_set_shape
|
Set a new shape for the tensor; memory is reallocated if the new total size is bigger than the previous one.
|
ov_version_free
|
Release the memory allocated by ov_version_t.
|
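The entries above together cover OpenVINO's create, compile, infer, and free life cycle. As an illustrative sketch only (it assumes the OpenVINO C runtime and its `openvino/c/openvino.h` header are installed, and uses `model.xml` as a placeholder path; error handling is reduced to status checks), the typical call sequence is:

```c
#include <openvino/c/openvino.h>
#include <stdio.h>

int main(void) {
    ov_core_t* core = NULL;
    ov_model_t* model = NULL;
    ov_compiled_model_t* compiled_model = NULL;
    ov_infer_request_t* infer_request = NULL;

    /* Construct an OpenVINO Core instance. */
    if (ov_core_create(&core) != OK) return 1;

    /* Read a model from an IR file (placeholder path). */
    if (ov_core_read_model(core, "model.xml", NULL, &model) != OK) goto cleanup;

    /* Compile the model for a device; 0 means no extra property args. */
    if (ov_core_compile_model(core, model, "CPU", 0, &compiled_model) != OK) goto cleanup;

    /* Create an inference request and run synchronous inference.
       Inputs would be set beforehand via ov_infer_request_set_input_tensor. */
    if (ov_compiled_model_create_infer_request(compiled_model, &infer_request) != OK) goto cleanup;
    ov_infer_request_infer(infer_request);

cleanup:
    /* Each ov_*_free releases the memory allocated by the matching create call. */
    if (infer_request) ov_infer_request_free(infer_request);
    if (compiled_model) ov_compiled_model_free(compiled_model);
    if (model) ov_model_free(model);
    if (core) ov_core_free(core);
    return 0;
}
```

Outputs would then be read back with ov_infer_request_get_output_tensor and ov_tensor_data, and freed with ov_tensor_free.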