ONNX Runtime: [ONNXRuntimeError] : 28 : FAIL : Unsupported model format

The model format is not supported by ONNX Runtime.

Understanding ONNX Runtime

ONNX Runtime is a high-performance inference engine for machine learning models in the Open Neural Network Exchange (ONNX) format. It is designed to accelerate the deployment of machine learning models across various platforms and devices, providing a flexible and efficient solution for developers and data scientists. ONNX Runtime supports a wide range of hardware accelerators and is optimized for both cloud and edge environments.

Identifying the Symptom

When working with ONNX Runtime, you might encounter the following error message: [ONNXRuntimeError] : 28 : FAIL : Unsupported model format. This error indicates that the model you are trying to load or execute is not in a format that ONNX Runtime can process.
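
In Python, the error typically surfaces when an inference session is created from a file that is not actually an ONNX model. A minimal sketch (the filename here is a placeholder):

import onnxruntime as ort

# Pointing InferenceSession at a non-ONNX file (e.g. a TensorFlow or
# PyTorch checkpoint) raises an error from the ONNXRuntimeError family
session = ort.InferenceSession("model.pb")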

Common Scenarios

  • Attempting to load a model that is not in the ONNX format.
  • Using an outdated or incompatible version of ONNX Runtime (a quick version check is sketched after this list).
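
For the second scenario, you can compare the installed ONNX Runtime version against the opset the model targets. A small sketch, assuming the file loads as a valid ONNX model:

import onnx
import onnxruntime as ort

print("ONNX Runtime version:", ort.__version__)

# Inspect which opset(s) the model targets; an opset newer than the
# installed runtime supports can also cause load failures
model = onnx.load("model.onnx")
for opset in model.opset_import:
    print("domain:", opset.domain or "ai.onnx", "opset:", opset.version)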

Exploring the Issue

The error code 28 signifies a failure due to an unsupported model format. ONNX Runtime is specifically designed to work with models in the ONNX format. If a model is in a different format, such as TensorFlow, PyTorch, or another proprietary format, ONNX Runtime will not be able to process it.

Why This Happens

This issue typically arises when a model has not been converted to the ONNX format before being loaded. ONNX Runtime does not convert models on the fly, so the conversion must happen as a separate step before you create an inference session.

Steps to Resolve the Issue

To resolve this issue, you need to convert your model to the ONNX format. Here are the steps to do so:

Step 1: Convert Your Model

Depending on the original format of your model, you can use various tools to convert it to ONNX. For example:

Example for PyTorch, using the built-in torch.onnx exporter:

import torch

# Assuming 'model' is your PyTorch model
model.eval()  # switch to inference mode before exporting
dummy_input = torch.randn(1, 3, 224, 224)  # example input shape matching your model
torch.onnx.export(model, dummy_input, "model.onnx",
                  opset_version=17,  # pin an opset your ONNX Runtime version supports
                  input_names=["input"], output_names=["output"])
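
If your model is a TensorFlow SavedModel, the tf2onnx package (installed with pip install tf2onnx) is a commonly used converter. A sketch of the command-line invocation, where both paths are placeholders:

python -m tf2onnx.convert --saved-model ./saved_model_dir --output model.onnx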

Step 2: Verify the ONNX Model

Once converted, verify the ONNX model to ensure it is valid. You can use the ONNX checker:

import onnx

# Load the ONNX model
onnx_model = onnx.load("model.onnx")

# Check the model; this raises onnx.checker.ValidationError if the graph is invalid
onnx.checker.check_model(onnx_model)
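
Because check_model raises an exception rather than returning a status, it is often wrapped when used inside a conversion pipeline. A minimal sketch:

import onnx
from onnx.checker import ValidationError

try:
    onnx.checker.check_model(onnx.load("model.onnx"))
    print("model.onnx is a valid ONNX graph")
except ValidationError as err:
    print(f"Validation failed: {err}")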

Step 3: Load the Model in ONNX Runtime

After conversion and verification, you can load the model in ONNX Runtime:

import onnxruntime as ort

# Load the ONNX model with ONNX Runtime; listing execution providers
# explicitly avoids warnings on recent versions
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
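
To confirm the session works end to end, run a single inference. A minimal sketch, assuming the model takes one float32 input of the shape used during export:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # query the graph's input name
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})  # None = return all outputs
print(outputs[0].shape)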

Additional Resources

For more information on converting models to ONNX, refer to the ONNX Supported Tools page. Additionally, the ONNX Runtime documentation provides comprehensive guidance on model loading and execution.
