UART Baud Timing Pattern Interrupt / J1708
Posted: Tue May 28, 2019 5:54 pm
Hi,
I want to implement J1708 on the ESP32. J1708 is essentially a modified version of RS485.
I am using the RS485 UART mode with a modified transceiver, and sending and receiving data works fine.
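For context, this is roughly how I set the port up (a simplified sketch, not my exact code; the port number and pins are placeholders for my actual wiring):

```c
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "driver/uart.h"

#define J1708_UART  UART_NUM_1
#define TXD_PIN     17          // placeholder pins
#define RXD_PIN     16
#define BUF_SIZE    256

static QueueHandle_t uart_queue;

static void j1708_uart_init(void)
{
    uart_config_t cfg = {
        .baud_rate = 9600,                  // J1708 line rate
        .data_bits = UART_DATA_8_BITS,
        .parity    = UART_PARITY_DISABLE,
        .stop_bits = UART_STOP_BITS_1,
        .flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
    };
    uart_param_config(J1708_UART, &cfg);
    uart_set_pin(J1708_UART, TXD_PIN, RXD_PIN,
                 UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE);
    uart_driver_install(J1708_UART, BUF_SIZE * 2, BUF_SIZE * 2, 20,
                        &uart_queue, 0);
    uart_set_mode(J1708_UART, UART_MODE_RS485_HALF_DUPLEX);
}
```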
But J1708 uses a timing rule to delimit messages. It's simple:
After a message, which consists of at most 21 bytes, an idle period is placed on the bus. The idle time is at least 10 times the bit time.
The network runs at 9600 bps, so the bit time is 104.17 us and 10 bit times is about 1.04 ms.
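In code, those numbers work out like this (the constant names are just mine, to make the arithmetic explicit):

```c
#define J1708_BAUD          9600
#define J1708_BIT_TIME_US   (1000000.0f / J1708_BAUD)    // ~104.17 us
#define J1708_IDLE_TIME_US  (10.0f * J1708_BIT_TIME_US)  // ~1041.7 us
#define J1708_MAX_MSG_LEN   21                           // bytes per message
```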
I wonder if there is any way to detect this idle time in hardware.
- I've already tried the pattern-detection ("AT CMD") interrupt with a pattern length of 0 bytes and various post_idle and pre_idle settings, but it does not work (presumably because it matches a specific character sequence rather than a bare idle period).
- I also tried getting an interrupt for every single byte, taking a microsecond timestamp, and doing the framing in software, roughly as in the first sketch after this list. That should work, but for some reason it did not work out well. Perhaps it is the sender delaying the bytes and thereby ending my messages early, but on the oscilloscope I did not see any such break.
- I also tried setting RX_TOUT_THRHD to 1 (one byte time), which is basically exactly the time I need; see the second sketch after this list. I got interrupts again, but the messages were still split in half.
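This is roughly what I mean by the software analysis in the second attempt (simplified; handle_frame() is a placeholder for my actual message handler). One thing I suspect: the timestamp is taken when the byte leaves the driver's ring buffer, not when it arrived on the wire, so driver buffering may hide or shift the real gaps:

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/uart.h"
#include "esp_timer.h"

// Placeholder for my actual message handler.
void handle_frame(const uint8_t *buf, int len);

// Attempt 2: timestamp every byte and split frames on the idle gap.
static void j1708_rx_task(void *arg)
{
    uint8_t frame[J1708_MAX_MSG_LEN];
    int len = 0;
    int64_t last_us = 0;

    for (;;) {
        uint8_t b;
        if (uart_read_bytes(J1708_UART, &b, 1, portMAX_DELAY) == 1) {
            int64_t now = esp_timer_get_time();
            // A gap longer than 10 bit times ends the previous frame.
            if (len > 0 && (now - last_us) > (int64_t)J1708_IDLE_TIME_US) {
                handle_frame(frame, len);
                len = 0;
            }
            if (len < J1708_MAX_MSG_LEN) {
                frame[len++] = b;
            }
            last_us = now;
        }
    }
}
```

And this is roughly the RX_TOUT variant from the third attempt, using the driver's event queue and the definitions from the snippets above (again a simplified sketch, not my exact code):

```c
// Attempt 3: let the UART RX timeout mark the frame boundary.
static void j1708_event_task(void *arg)
{
    uart_event_t ev;
    uint8_t frame[BUF_SIZE];

    // The timeout is measured in symbol times; one 8N1 symbol
    // (10 bits, ~1.04 ms at 9600 bps) is almost exactly the
    // J1708 idle gap.
    uart_set_rx_timeout(J1708_UART, 1);

    for (;;) {
        if (xQueueReceive(uart_queue, &ev, portMAX_DELAY)) {
            if (ev.type == UART_DATA) {
                // In theory, a UART_DATA event raised by the timeout
                // holds exactly one complete J1708 message (21 bytes
                // max, well below the FIFO-full threshold).
                int to_read = ev.size < BUF_SIZE ? (int)ev.size : BUF_SIZE;
                int len = uart_read_bytes(J1708_UART, frame, to_read, 0);
                if (len > 0) {
                    handle_frame(frame, len);
                }
            }
        }
    }
}
```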
Any ideas?
Thank you.