public class InferRequest : IDisposable
InferRequest | Constructs InferRequest from the initialized IntPtr. |
Ptr | [public] InferRequest class pointer. |
cancel | Cancels the inference request. |
Dispose | Releases unmanaged resources. |
Finalize | InferRequest's destructor. (Overrides Object.Finalize) |
get_input_tensor | Gets an input tensor for inference. |
get_input_tensor(UInt64) | Gets an input tensor for inference by index. |
get_output_tensor | Gets an output tensor for inference. |
get_output_tensor(UInt64) | Gets an output tensor for inference by index. |
get_profiling_info | Queries performance measures per layer to identify the most time-consuming operation. |
get_tensor(Input) | Gets an input/output tensor for inference by port. |
get_tensor(Node) | Gets an input/output tensor for inference by node. |
get_tensor(Output) | Gets an input/output tensor for inference by port. |
get_tensor(String) | Gets an input/output tensor for inference by tensor name. |
infer | Infers specified input(s) in synchronous mode. |
set_input_tensor(Tensor) | Sets an input tensor to infer models with single input. |
set_input_tensor(UInt64, Tensor) | Sets an input tensor to infer. |
set_output_tensor(Tensor) | Sets an output tensor to infer models with single output. |
set_output_tensor(UInt64, Tensor) | Sets an output tensor to infer. Index of the output is preserved across Model, CompiledModel, and InferRequest. |
set_tensor(Input, Tensor) | Sets an input/output tensor to infer. |
set_tensor(Node, Tensor) | Sets an input/output tensor to infer. |
set_tensor(Output, Tensor) | Sets an input/output tensor to infer. |
set_tensor(String, Tensor) | Sets an input/output tensor to infer on. |
start_async | Starts inference of specified input(s) in asynchronous mode. |
wait | Waits for the result to become available; blocks until inference is complete. |
wait_for | Waits for the result to become available. Blocks until the specified timeout has elapsed or the result becomes available, whichever comes first. |
m_ptr | [private] InferRequest class pointer. |
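The members above can be combined into a typical inference flow: obtain an `InferRequest` from a compiled model, fill the input tensor, run inference either synchronously (`infer`) or asynchronously (`start_async` + `wait`/`wait_for`), and read the output tensor. The sketch below is a minimal, hedged example; it assumes the OpenVINO C# API's `Core`, `Model`, and `CompiledModel` types with `read_model`, `compile_model`, and `create_infer_request` methods, and the model path is hypothetical.

```csharp
// Minimal sketch, assuming the OpenVINO C# API; "model.xml" is a placeholder path.
Core core = new Core();
Model model = core.read_model("model.xml");
CompiledModel compiledModel = core.compile_model(model, "CPU");

// InferRequest implements IDisposable, so a `using` block releases the
// unmanaged pointer deterministically instead of relying on Finalize.
using (InferRequest request = compiledModel.create_infer_request())
{
    // Single-input / single-output model: no index or name needed.
    Tensor input = request.get_input_tensor();
    // ... fill the input tensor's data here ...

    // Synchronous mode: blocks until inference completes.
    request.infer();
    Tensor output = request.get_output_tensor();

    // Asynchronous alternative:
    request.start_async();
    request.wait();          // or request.wait_for(timeoutMs) to bound the wait
}
```

For multi-input models, the indexed and named overloads (`set_input_tensor(UInt64, Tensor)`, `set_tensor(String, Tensor)`) select the specific input; the index is preserved across `Model`, `CompiledModel`, and `InferRequest`.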