Page 1 of 1

Quantization layer not supported: Softmax and Constant (AIV-702) #168

Posted: Thu Jul 25, 2024 12:31 am
by raprakashvi
Hi,
I am following the tutorial https://blog.espressif.com/hand-gesture ... 6d7e13fd37 and have been running into an issue at the quantization step.

My Python version is 3.7, and whenever I reach quantization I get the following errors:
Generating the quantization table:
Constant is not supported on esp-dl yet
LogSoftmax is not supported on esp-dl yet

My layers are simple, as shown below:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=(96, 96, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(6, activation='softmax'))

I have checked the list of supported layers, and Softmax appears to be supported. I am using the calibrator.so file on Linux. Should I be using convert.py as a workaround?
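One variant I could try (not yet verified against esp-dl; this is just an idea) is to export the model with a plain linear final layer so the Constant/LogSoftmax nodes never appear in the graph, and apply softmax to the raw logits on the host instead. Since softmax is monotonic, the predicted class is unchanged. A minimal sketch of the host-side step, using NumPy only:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax applied on the host,
    # after the quantized model returns raw logits.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical raw outputs from a model whose final Dense(6)
# layer has no activation (activation=None during training,
# with a from_logits=True loss).
logits = np.array([[2.0, 1.0, 0.1, -1.0, 0.5, 0.0]])
probs = softmax(logits)
pred = probs.argmax(axis=-1)
```

Because argmax(softmax(x)) == argmax(x), the class decision on-device could even skip the softmax entirely and only compute it when calibrated probabilities are needed.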

I have also tried replicating similar steps by training a model in PyTorch and converting it to ONNX. I was able to optimize it, but it likewise fails at the calibration step. I would appreciate your help and guidance. Thanks.

-Ravi