mqtt_client keepalive
Posted: Fri Jun 09, 2023 11:27 am
Hi,
I'm using the ESP mqtt_client library with ESP-IDF v4.4.4 and v5.0.1.
In both SDKs, the keepalive interval actually used by mqtt_client seems to be half of the value set in the client configuration.
I have checked this by looking at the mosquitto logs for the PINGREQ / PINGRESP exchanges with my client.
For example, when I set a keepalive of 30 s, the PINGREQ/PINGRESP exchanges happen roughly every 15-16 s (mosquitto logs below):
Code:
1686308238: Received PINGREQ from CLI-0000f412fad7a3e0
1686308238: Sending PINGRESP to CLI-0000f412fad7a3e0
1686308254: Received PINGREQ from CLI-0000f412fad7a3e0
1686308254: Sending PINGRESP to CLI-0000f412fad7a3e0
1686308270: Received PINGREQ from CLI-0000f412fad7a3e0
1686308270: Sending PINGRESP to CLI-0000f412fad7a3e0
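For completeness, this is roughly how I set up the client (a sketch using the ESP-IDF v5.x config layout as I understand it, with a placeholder broker URI; on v4.4 the keepalive field sits directly at the top level of esp_mqtt_client_config_t):

Code:

#include "mqtt_client.h"

static void start_mqtt(void)
{
    /* Placeholder broker URI; only the keepalive value matters for this question. */
    esp_mqtt_client_config_t mqtt_cfg = {
        .broker.address.uri = "mqtt://192.168.1.10",
        .session.keepalive = 30,   /* I expect one PINGREQ every ~30 s */
    };

    esp_mqtt_client_handle_t client = esp_mqtt_client_init(&mqtt_cfg);
    esp_mqtt_client_start(client);
}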
Looking at the mqtt_client.c source code, I've encountered this code:

Code:
static inline bool has_timed_out(uint64_t last_tick, uint64_t timeout) {
    uint64_t next = last_tick + timeout;
    return (int64_t)(next - platform_tick_get_ms()) <= 0;
}

static esp_err_t process_keepalive(esp_mqtt_client_handle_t client)
{
    if (client->connect_info.keepalive > 0) {
        const uint64_t keepalive_ms = client->connect_info.keepalive * 1000;

        if (client->wait_for_ping_resp == true) {
            if (has_timed_out(client->keepalive_tick, keepalive_ms)) {  /* <-- PINGRESP timeout = full keepalive */
                ESP_LOGE(TAG, "No PING_RESP, disconnected");
                esp_mqtt_abort_connection(client);
                client->wait_for_ping_resp = false;
                return ESP_FAIL;
            }
            return ESP_OK;
        }

        if (has_timed_out(client->keepalive_tick, keepalive_ms/2)) {  /* <-- PINGREQ sent after half the keepalive */
            if (esp_mqtt_client_ping(client) == ESP_FAIL) {
                ESP_LOGE(TAG, "Can't send ping, disconnected");
                esp_mqtt_abort_connection(client);
                return ESP_FAIL;
            }
            client->wait_for_ping_resp = true;
            return ESP_OK;
        }
    }
    return ESP_OK;
}
Looking at this code, it seems to me that the client sends a PINGREQ as soon as the time since the last keepalive_tick exceeds keepalive_ms/2, i.e. roughly every 15 s for a configured keepalive of 30 s, which matches the logs above.
Why such an implementation? Is there a real motivation for halving the configured keepalive value inside the internal implementation of the mqtt library?
From what I've read about MQTT keepalive, the client should send a PINGREQ once every keepalive interval, and then wait for the broker's PINGRESP for up to half of the keepalive.
This behaviour seems to be reversed in the current implementation...
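To make my reading concrete, here is a rough sketch of what I expected process_keepalive to look like (my own hypothetical variant for discussion, not code from the library, reusing the same fields and helpers as the snippet above): send the PINGREQ only once a full keepalive interval has elapsed, then give the broker half a keepalive to answer with PINGRESP.

Code:

/* Hypothetical variant for discussion only - NOT the esp-mqtt implementation. */
static esp_err_t process_keepalive_expected(esp_mqtt_client_handle_t client)
{
    if (client->connect_info.keepalive <= 0) {
        return ESP_OK;
    }
    const uint64_t keepalive_ms = (uint64_t)client->connect_info.keepalive * 1000;

    if (client->wait_for_ping_resp) {
        /* A PINGREQ is outstanding: allow the broker half a keepalive to answer. */
        if (has_timed_out(client->keepalive_tick, keepalive_ms / 2)) {
            ESP_LOGE(TAG, "No PING_RESP, disconnected");
            esp_mqtt_abort_connection(client);
            client->wait_for_ping_resp = false;
            return ESP_FAIL;
        }
        return ESP_OK;
    }

    /* Idle: send the PINGREQ only after a full keepalive interval. */
    if (has_timed_out(client->keepalive_tick, keepalive_ms)) {
        if (esp_mqtt_client_ping(client) == ESP_FAIL) {
            ESP_LOGE(TAG, "Can't send ping, disconnected");
            esp_mqtt_abort_connection(client);
            return ESP_FAIL;
        }
        client->keepalive_tick = platform_tick_get_ms();  /* measure the response timeout from the ping */
        client->wait_for_ping_resp = true;
    }
    return ESP_OK;
}

With this logic, an idle connection with keepalive = 30 would produce one PINGREQ roughly every 30 s, instead of every ~15 s as in the logs above.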