HMCT provides a model inference interface to run inference on the intermediate models generated during the model conversion process.
| Member Function | Description | Return Value | Range of Legal Parameter Values |
| --- | --- | --- | --- |
| `def __init__(self, model: ModelProto)` | Class initialization function; takes an onnx ModelProto object. | None. | An onnx ModelProto object. |
| `def create_session(self) -> InferenceSession` | Create an InferenceSession for inference. | InferenceSession for inference. | Parameterless. |
| `def get_support_devices(cls) -> list[str]` | Get all devices supported by the current ORTExecutor. | String list of currently supported devices. | Parameterless. |
| `def to(self, device: Union[str, list[str]]) -> None` | Set the device(s) on which model inference runs. | None. | 'cuda', 'cpu', or a list of both. |
| `def get_inputs(self) -> list[NodeArg]` | Get a list of NodeArg objects for the model inputs. NodeArg has three member variables: name (the input name), type (the input datatype), and shape (the input shape). | List of NodeArg objects for the model inputs. | Parameterless. |
| `def get_outputs(self) -> list[NodeArg]` | Get a list of NodeArg objects for the model outputs. NodeArg is defined as in get_inputs: name (the output name), type (the output datatype), and shape (the output shape). | List of NodeArg objects for the model outputs. | Parameterless. |
| `def inference(self, inputs: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]` | Perform one forward inference on the input data and return the result. | A dict whose keys are output-name strings and whose values are the np.ndarray output results. | A dict whose keys are input-name strings and whose values are the np.ndarray inputs for this inference. |
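The call pattern implied by the table can be sketched as follows. This is a minimal stand-in, not HMCT's actual implementation: `ToyExecutor` is a hypothetical class that mirrors the documented interface, and the hard-coded "add one" model replaces a real onnx ModelProto so the sketch runs without HMCT or onnxruntime installed.

```python
from typing import Dict, List, Union

import numpy as np


class ToyExecutor:
    """Hypothetical stand-in mirroring the documented ORTExecutor interface.

    The real class wraps an onnx ModelProto; here a toy "add one" model
    is hard-coded so the dict-in/dict-out contract can be demonstrated.
    """

    def __init__(self, model=None):
        # The real __init__ takes an onnx ModelProto object; ignored here.
        self._devices = ["cpu"]

    @classmethod
    def get_support_devices(cls) -> List[str]:
        # Devices this executor can run inference on.
        return ["cpu", "cuda"]

    def to(self, device: Union[str, List[str]]) -> None:
        # Accept a single device string or a list of devices.
        devices = [device] if isinstance(device, str) else list(device)
        unsupported = set(devices) - set(self.get_support_devices())
        if unsupported:
            raise ValueError(f"unsupported devices: {sorted(unsupported)}")
        self._devices = devices

    def inference(self, inputs: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
        # dict in (input name -> array), dict out (output name -> array).
        return {"output": inputs["input"] + 1.0}


executor = ToyExecutor()
executor.to("cpu")
result = executor.inference({"input": np.zeros((2, 3), dtype=np.float32)})
print(result["output"].shape)
```

In the real workflow, the ModelProto passed at construction would be an intermediate model produced during conversion, and the input/output names would come from `get_inputs()` and `get_outputs()` rather than being hard-coded.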