The cost of C++ exception handling

I tried to stay away from C++ exception handling in all my ESP32 family work, since legend has it that it is a costly feature, in particular on embedded systems.
These days, though, I'm writing a driver for an SPI device where a lot of the code consists of error handling that checks whether the SPI transfer worked (plus other edge cases), which it probably does in 99.99% of cases.
Since I'm writing in C++ anyway, and the call site is so much nicer if you can just return the payload instead of an error code, I wonder whether I should introduce C++ exception handling for this after all.
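To make the trade-off concrete, here is a minimal sketch of the two call-site styles I'm weighing (spi_transfer_raw, SpiError and the read_register functions are made-up placeholders, not the actual driver):

Code: Select all

#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Made-up low-level call standing in for the real SPI transfer.
enum class SpiError { Ok, Timeout, BadLength };

SpiError spi_transfer_raw(const std::uint8_t* tx, std::size_t len, std::vector<std::uint8_t>& rx)
{
    (void)tx; (void)len;
    rx.assign(4, 0x00);          // pretend we clocked in four bytes
    return SpiError::Ok;
}

// Error-code style: the payload comes back via a reference parameter and
// every caller has to check the code before touching it.
SpiError read_register_codes(std::uint8_t reg, std::vector<std::uint8_t>& payload)
{
    const std::uint8_t cmd[] = { reg };
    return spi_transfer_raw(cmd, sizeof cmd, payload);
}

// Exception style: the payload is simply the return value; the rare failure
// throws, so the happy path at the call site stays clean.
std::vector<std::uint8_t> read_register_throwing(std::uint8_t reg)
{
    std::vector<std::uint8_t> payload;
    const std::uint8_t cmd[] = { reg };
    if (spi_transfer_raw(cmd, sizeof cmd, payload) != SpiError::Ok) {
        throw std::runtime_error("SPI transfer failed");
    }
    return payload;
}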
- Posts: 1708
- Joined: Mon Oct 17, 2022 7:38 pm
- Location: Europe, Germany
Re: The cost of C++ exception handling
I do use C++ exceptions, specifically for the reasons you mentioned, i.e. a much simpler/cleaner code flow and a better API.
I have identified 3.5 'costs' of exceptions so far:
1) Binary size: The code the compiler generates under the hood solely for propagating potential exceptions reportedly amounts to a 5-8% increase in binary code size.
2) Stack space: Exceptions seem to require quite a bit of stack memory if thrown. IIRC, when testing I had to increase my task's stack size by 2 kB to accommodate exception handling, compared to the same task when not provoking an exception.
3) RAM for the "emergency pool": This RAM is lost 'forever' to the application. I assume 1-2 kB would be sufficient, but I haven't tried.
3.5) CPU time iff an exception is actually thrown. On an ESP32C3, I timed a trivial try { throw myex {}; } catch (const myex& e) {...} and it clocked in at around 180,000-200,000 CPU cycles (i.e. over 1 ms!) between throwing and catching the exception.
This may or may not sound bad, but compared to if(result == NOT_GOOD) { handle_error(); }, catching an exception is slower by a factor of about 10,000. (A sketch of one way to take such a measurement follows below.)
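For reference, a minimal sketch of how such a number can be reproduced, assuming ESP-IDF's esp_timer_get_time() for the timestamps (myex is just the trivial exception type from the snippet above):

Code: Select all

#include <cstdint>
#include <cstdio>
#include "esp_timer.h"   // esp_timer_get_time(): microseconds since boot

struct myex {};          // trivial exception type, as in the example above

void measure_throw_cost()
{
    const int64_t t0 = esp_timer_get_time();
    try {
        throw myex {};
    } catch (const myex&) {
        // nothing to handle here; we only care about the unwind/catch overhead
    }
    const int64_t t1 = esp_timer_get_time();
    std::printf("throw/catch took %lld us\n", static_cast<long long>(t1 - t0));
}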
As a middle ground, I also use 'result' objects (with operator bool()) where it makes sense, e.g. Result_t r = try_it(); if(r) { r.useResultValue(); } - a sketch of such a type follows below.
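For illustration, a minimal sketch of what such a result type could look like (Result_t, Error and try_it are placeholder names, not a real library):

Code: Select all

#include <utility>

// Placeholder error codes for the sketch.
enum class Error { None, Timeout, BadCrc };

// Minimal 'result' wrapper: carries either a value or an error code and is
// testable via operator bool(), as in the snippet above.
template <typename T>
class Result_t {
public:
    static Result_t ok(T value)   { return Result_t(std::move(value), Error::None); }
    static Result_t fail(Error e) { return Result_t(T{}, e); }

    explicit operator bool() const  { return error_ == Error::None; }
    const T& useResultValue() const { return value_; }
    Error error() const             { return error_; }

private:
    Result_t(T value, Error e) : value_(std::move(value)), error_(e) {}
    T value_;
    Error error_;
};

// Usage, mirroring the snippet above (try_it() is hypothetical):
Result_t<int> try_it() { return Result_t<int>::ok(42); }

void example()
{
    Result_t<int> r = try_it();
    if (r) {
        int value = r.useResultValue();
        (void)value;
    } else {
        // r.error() tells us what went wrong
    }
}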
- Posts: 168
- Joined: Sun May 22, 2022 2:42 pm
Re: The cost of C++ exception handling
I also did some measurements on my ESP32S3.
It looks like I'm down to 416 µs here for the "throw" case (measured by toggling GPIOs and watching them on an oscilloscope).
The "try/catch" itself contributes in the nanosecond range, so that's practically zero cost.
Given that I almost **never** throw, this will improve my call sites a lot, and I can live with the performance impact in the "throw" case.
Thanks a lot!
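For reference, a rough sketch of that kind of GPIO-based measurement, assuming ESP-IDF's GPIO driver (GPIO_NUM_4 and myex are arbitrary placeholders):

Code: Select all

#include "driver/gpio.h"

struct myex {};

void measure_throw_with_gpio()
{
    gpio_set_direction(GPIO_NUM_4, GPIO_MODE_OUTPUT);

    gpio_set_level(GPIO_NUM_4, 1);       // rising edge: right before the throw
    try {
        throw myex {};
    } catch (const myex&) {
        gpio_set_level(GPIO_NUM_4, 0);   // falling edge: exception caught
    }
    // The pulse width seen on the oscilloscope is the throw-to-catch time.
}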
- Posts: 52
- Joined: Fri Aug 11, 2023 4:56 am
Re: The cost of C++ exception handling
IMO, this question has been well answered and the costs actually measured and considered instead of relying on folklore. Gold star.
As a reminder, especially to those coming from a background in C, you CAN have multiple values returned from a function. Sometimes that plays out really well and sometimes it just makes the call stack and argument convention terrible, but it IS a tool to keep handy in your toolbox. (In C you CAN return a struct, too...)

Code: Select all

auto foo_writer() {
    return std::make_tuple(error_code, bytes_written);
}

{
    ...
    auto [ error_code, bytes_written ] = foo_writer();
}

std::optional is another approach that works in some cases (a small sketch is attached at the end of this post).
std::expected also lets you express some of these cases pretty well with a new syntax that, IMO, is even clearer. That example becomes something like

Code: Select all

std::expected<long, ErrorCode> foo_writer() {
    if (device_is_on_fire) {
        return std::unexpected { ErrorCode::DeviceCombusted };
    }
    ...
    return bytes_written;
}

{
    auto io_result = foo_writer();
    if (io_result.has_value()) {
        u.u_base += io_result.value();
    } else {
        u.u_error = io_result.error();
    }
}

Admittedly, with a simple byte count like this, it's easy to say "use a negative value" or "use zero" or whatever, but if the actual result is a std::vector<iobuf>, where an iobuf is a (start, len) pair or whatever, it gets more funky to try to smuggle an error code along inline like that.
I haven't looked at the Xtensa code for these, but the RISC-V code (they generally track, as both are very normal architectures with generous register counts - like pretty much everything from the last few decades) returns those values - which are probably already in registers up in foo_writer anyway - in $a0 and $a1 to the caller. If it gets inlined, the costs for all this just disappear.
It's way cheaper than unwinding a throw.
These hammers don't fit all nails, and this isn't an open invitation for the "C89 is all we ever need" crowd to show up for a fight, but it's a reminder that you have choices beyond exceptions.
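As mentioned above, here is what the same sketch could look like with std::optional (C++17). Note the trade-off: on failure the caller only learns "it didn't work", no error code travels along (the foo_writer internals are placeholders):

Code: Select all

#include <cstddef>
#include <optional>

// std::optional variant of the hypothetical foo_writer() sketch above.
std::optional<std::size_t> foo_writer()
{
    const bool device_is_on_fire = false;    // placeholder condition
    const std::size_t bytes_written = 42;    // placeholder result
    if (device_is_on_fire) {
        return std::nullopt;
    }
    return bytes_written;
}

void caller()
{
    if (auto n = foo_writer()) {
        // *n is the byte count on success
        (void)*n;
    } else {
        // failed, but we don't know why -- that's what std::expected adds
    }
}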
- Posts: 1708
- Joined: Mon Oct 17, 2022 7:38 pm
- Location: Europe, Germany
Re: The cost of C++ exception handling
RandomInternetGuy wrote: ↑ Tue Jan 02, 2024 4:29 am
IMO, this question has been well answered and the costs actually measured and considered instead of relying on folklore. Gold star.
That'd be neat. Do you have any links/figures you can share w.r.t. the ESP32s?
RandomInternetGuy wrote:
...you have choices beyond exceptions.
Unless you consider being able to steer the (exceptional) control flow without an if/else after every call the real benefit of exceptions...
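To illustrate that point, a small sketch (all function names are made up): the happy path reads straight through, and the first failure anywhere in the chain unwinds to a single catch.

Code: Select all

#include <cstddef>
#include <cstdint>
#include <stdexcept>

// Made-up driver internals; any of them may throw on failure.
void select_chip()   { /* ... */ }
void deselect_chip() { /* ... */ }

void transfer_block(const std::uint8_t* data, std::size_t len)
{
    (void)data;
    if (len == 0) {
        throw std::runtime_error("empty SPI transfer");
    }
    /* ... */
}

// No if/else after every call: errors propagate out of here on their own.
void write_page(const std::uint8_t* data, std::size_t len)
{
    select_chip();
    transfer_block(data, len);
    deselect_chip();
}

void caller(const std::uint8_t* data, std::size_t len)
{
    try {
        write_page(data, len);
    } catch (const std::runtime_error&) {
        // one place to handle "the SPI transfer went wrong"
    }
}

(In a real driver you would pair this with RAII, e.g. a chip-select guard, so that cleanup like deselect_chip() still happens while the stack unwinds.)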