
ONNX Runtime ONNXRuntimeError: [ONNXRuntimeError] : 28 : FAIL : Unsupported model format

The model format is not supported by ONNX Runtime.


What is ONNX Runtime ONNXRuntimeError: [ONNXRuntimeError] : 28 : FAIL : Unsupported model format

Understanding ONNX Runtime

ONNX Runtime is a high-performance inference engine for machine learning models in the Open Neural Network Exchange (ONNX) format. It is designed to accelerate the deployment of machine learning models across various platforms and devices, providing a flexible and efficient solution for developers and data scientists. ONNX Runtime supports a wide range of hardware accelerators and is optimized for both cloud and edge environments.

Identifying the Symptom

When working with ONNX Runtime, you might encounter the following error message: ONNXRuntimeError: [ONNXRuntimeError] : 28 : FAIL : Unsupported model format. This error indicates that the model you are trying to load or execute is not in a format that ONNX Runtime can process.
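For illustration, here is a minimal sketch of how this class of error can surface: pointing an InferenceSession at a file that is not ONNX. The filename model.pt is a placeholder for, say, a PyTorch checkpoint, and the exact error code and message vary by ONNX Runtime version and file contents:

import onnxruntime as ort

# Creating a session from a non-ONNX file fails at load time.
# "model.pt" stands in for any non-ONNX file, e.g. a PyTorch checkpoint.
try:
    session = ort.InferenceSession("model.pt")
except Exception as exc:
    print(type(exc).__name__, exc)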

Common Scenarios

  • Attempting to load a model that is not in the ONNX format.
  • Using an outdated or incompatible version of ONNX Runtime (a quick version check is shown below).
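To rule out the second scenario, it can help to print the installed versions. A minimal sketch, assuming both the onnx and onnxruntime packages are installed:

import onnx
import onnxruntime

# An outdated runtime may not support the IR version or opset a newer exporter produces.
print("onnx version:", onnx.__version__)
print("onnxruntime version:", onnxruntime.__version__)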

Exploring the Issue

The error code 28 signifies a failure due to an unsupported model format. ONNX Runtime is specifically designed to work with models in the ONNX format. If a model is in a different format, such as TensorFlow, PyTorch, or another proprietary format, ONNX Runtime will not be able to process it.
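One quick way to test whether a file parses as ONNX at all is to load it with the onnx package. A hedged sketch, with the path as a placeholder:

import onnx

# A non-ONNX file (e.g. a TensorFlow SavedModel or PyTorch checkpoint)
# will fail to parse as an ONNX protobuf here.
try:
    model = onnx.load("suspect_model.onnx")
    print("Parsed as ONNX, IR version:", model.ir_version)
except Exception as exc:
    print("Not a readable ONNX file:", exc)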

Why This Happens

This issue typically arises when a model has not been converted to the ONNX format before being loaded into ONNX Runtime. It is crucial to ensure that the model is in the correct format to leverage the capabilities of ONNX Runtime.

Steps to Resolve the Issue

To resolve this issue, you need to convert your model to the ONNX format. Here are the steps to do so:

Step 1: Convert Your Model

Depending on the original format of your model, you can use various tools to convert it to ONNX. For example:

  • For PyTorch models, use the built-in PyTorch ONNX exporter (torch.onnx.export).
  • For TensorFlow models, use the tf2onnx tool (see the sketch following the PyTorch example below).

Example for PyTorch:

import torch

# Assuming 'model' is your PyTorch model
dummy_input = torch.randn(1, 3, 224, 224)  # Example input shape
torch.onnx.export(model, dummy_input, "model.onnx")
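For TensorFlow, tf2onnx offers both a command-line converter (python -m tf2onnx.convert) and a Python API. A minimal sketch using the Keras entry point, assuming 'model' is a Keras model and the input shape is a placeholder:

import tensorflow as tf
import tf2onnx

# Pin the input signature so the exporter knows the shape and dtype.
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)
model_proto, _ = tf2onnx.convert.from_keras(model, input_signature=spec, output_path="model.onnx")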

Step 2: Verify the ONNX Model

Once converted, verify the ONNX model to ensure it is valid. You can use the ONNX checker:

import onnx

# Load the ONNX model
onnx_model = onnx.load("model.onnx")

# Check the model
onnx.checker.check_model(onnx_model)
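If validation fails, check_model raises an exception rather than returning a status. A small sketch for surfacing the reason, using the same file path as above:

import onnx
from onnx.checker import ValidationError

onnx_model = onnx.load("model.onnx")
try:
    onnx.checker.check_model(onnx_model)
    print("Model is valid ONNX.")
except ValidationError as exc:
    print("Model failed validation:", exc)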

Step 3: Load the Model in ONNX Runtime

After conversion and verification, you can load the model in ONNX Runtime:

import onnxruntime as ort

# Load the ONNX model with ONNX Runtime
session = ort.InferenceSession("model.onnx")
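To confirm the model actually runs end to end, you can feed it a dummy input. A sketch assuming the input shape from the PyTorch export example above:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")

# Ask the session for its declared input name instead of hard-coding it.
input_name = session.get_inputs()[0].name
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)  # shape assumed from the export example
outputs = session.run(None, {input_name: dummy})
print("Output shapes:", [o.shape for o in outputs])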

Additional Resources

For more information on converting models to ONNX, refer to the ONNX Supported Tools page. Additionally, the ONNX Runtime documentation provides comprehensive guidance on using ONNX Runtime effectively.
