`int32_t` in ESP v5.0 vs v4.4.2

When I was using ESP-IDF v4.4.2 with whatever version of the gcc compiler it installs by default, I used a `%d` specifier to print an `int32_t`; if I used `%ld` it would give me a warning. When I updated to v5.0, which also uses a later version of the C compiler, the opposite occurred: an `int32_t` now expects a `%ld` specifier (which I reckon is what I'd really expect) and warns if `%d` is used.

This isn't a problem, but I'm surprised that this interpretation of `int32_t` changed. I know some compilers on some architectures treat a 32-bit integer as a long, and some treat a 64-bit integer as a long. But obviously my architecture hasn't changed. So the newer version of the compiler seems to treat `uint32_t` as a long on ESP32, whereas the older version did not. Has anyone else seen this? I'm curious as to why it is different.
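For anyone hitting the same warnings, here is a minimal sketch of the usual portable fix: use the `PRId32`/`PRIx32` macros from `<inttypes.h>` instead of hard-coding `%d` or `%ld`. The variable names are just for illustration.

```c
// The PRId32 / PRIx32 macros from <inttypes.h> expand to whatever specifier
// matches the toolchain's typedef ("d" when int32_t is int, "ld" when it is
// long), so the same source compiles warning-free on both the v4.4.2 and
// v5.0 toolchains.
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    int32_t  ticks = -1234;
    uint32_t addr  = 0x3FF00000u;

    printf("ticks = %" PRId32 "\n", ticks);   // instead of "%d" or "%ld"
    printf("addr  = 0x%" PRIx32 "\n", addr);  // instead of "%x" or "%lx"
    return 0;
}
```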
Re: `int32_t` in ESP v5.0 vs v4.4.2
Thank you! Interesting discussion (I haven't gotten through half the thread yet!). It turns out not to have affected me much (two lines of my code, that's all). The primary answer to my question lies in this statement from that thread:
...we have changed the definition of uint32_t for Xtensa from unsigned int to unsigned long to match the way it is defined in upstream GCC, and also for the other architectures.
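This is easy to confirm on a given toolchain. Below is a small sketch (plain C11, not ESP-IDF-specific) that reports which underlying type `int32_t` and `uint32_t` resolve to; on the newer Xtensa GCC it should print the `long` variants, on the older one the `int` variants.

```c
// Report the underlying type the toolchain uses for the fixed-width typedefs.
#include <stdio.h>
#include <stdint.h>

#define TYPE_NAME(x) _Generic((x),  \
    int:           "int",           \
    long:          "long",          \
    unsigned int:  "unsigned int",  \
    unsigned long: "unsigned long", \
    default:       "other")

int main(void)
{
    int32_t  i = 0;
    uint32_t u = 0;

    printf("int32_t  is %s\n", TYPE_NAME(i));
    printf("uint32_t is %s\n", TYPE_NAME(u));
    return 0;
}
```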