
Change ONNX model at runtime

Posted: Mon Nov 27, 2023 10:12 am
by TobiasUhmann
Hi everyone,

In our project, we have so far been using TensorFlow Lite for Microcontrollers to run AI models on the ESP32. Now, however, we want to run ONNX models that cannot be converted to TensorFlow Lite. Luckily, we found ESP-DL, which even has a tutorial on running ONNX models: https://docs.espressif.com/projects/esp ... h-tvm.html

Unfortunately, I cannot find a way to change my ONNX model at runtime. The tutorial above describes how to generate C++ code for one specific ONNX model, and another tutorial shows how to write that code yourself. But there does not seem to be an API that simply takes an ONNX binary and dynamically builds the required objects on the heap. I need this because we update the model at runtime and don't know in advance which layers it consists of (the ONNX models are generated from sklearn, TensorFlow, etc.). In TensorFlow Lite I can load an arbitrary model like this:

Code: Select all

const tflite::Model *model = tflite::GetModel(this_model->binary);
Is there an equivalent in ESP-DL?

Thanks a lot in advance!

Re: Change ONNX model at runtime

Posted: Tue Nov 28, 2023 6:28 am
by BlueSkyB
If you use the TVM approach, the TVM architecture does not support changing the model at runtime.
ESP-DL does not support this feature either. First, ESP-DL has no graph-parsing function, so there is no interface to import ONNX models. Second, there is no mechanism to swap the model at runtime.
You could perhaps implement the functionality by building the model layer by layer yourself.
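To illustrate the "layer by layer" idea: you would parse the ONNX graph yourself and map each node's op type to a layer object you construct on the heap, then run the chain. The sketch below is plain C++ showing only that dispatch pattern; the `NodeDesc` descriptor, the `ReluLayer`/`ScaleLayer` types, and the float tensors are hypothetical stand-ins, not ESP-DL API (real ESP-DL layers operate on quantized tensors).

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Stand-in for a parsed graph node (in practice: an ONNX NodeProto).
struct NodeDesc {
    std::string op_type;            // e.g. "Relu", "Scale"
    std::vector<float> attributes;  // hypothetical flat attribute list
};

// Minimal layer interface; a real implementation would wrap ESP-DL layers.
struct Layer {
    virtual ~Layer() = default;
    virtual std::vector<float> forward(const std::vector<float> &in) = 0;
};

struct ReluLayer : Layer {
    std::vector<float> forward(const std::vector<float> &in) override {
        std::vector<float> out(in);
        for (auto &v : out)
            if (v < 0.f) v = 0.f;  // max(0, x)
        return out;
    }
};

struct ScaleLayer : Layer {
    float factor;
    explicit ScaleLayer(float f) : factor(f) {}
    std::vector<float> forward(const std::vector<float> &in) override {
        std::vector<float> out(in);
        for (auto &v : out) v *= factor;
        return out;
    }
};

// Registry: op type -> factory that builds the layer on the heap.
using Factory = std::function<std::unique_ptr<Layer>(const NodeDesc &)>;

std::map<std::string, Factory> make_registry() {
    return {
        {"Relu", [](const NodeDesc &) {
             return std::unique_ptr<Layer>(new ReluLayer());
         }},
        {"Scale", [](const NodeDesc &n) {
             return std::unique_ptr<Layer>(new ScaleLayer(n.attributes.at(0)));
         }},
    };
}

// Build the model dynamically from the parsed nodes and run one inference.
std::vector<float> run_model(const std::vector<NodeDesc> &graph,
                             std::vector<float> x) {
    auto registry = make_registry();
    for (const auto &node : graph) {
        auto layer = registry.at(node.op_type)(node);  // throws on unknown op
        x = layer->forward(x);
    }
    return x;
}
```

The hard part on a microcontroller would be the parsing step itself (ONNX is protobuf-encoded, and the full protobuf runtime is heavy), plus reimplementing each needed op with ESP-DL's quantized kernels.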

Re: Change ONNX model at runtime

Posted: Tue Nov 28, 2023 7:48 am
by TobiasUhmann
Thanks a lot for the confirmation.