Best Practice : std::thread vs xTaskCreate

PaulVdBergh
Posts: 58
Joined: Fri Feb 23, 2018 4:45 pm
Location: Brasschaat, Belgium

Best Practice : std::thread vs xTaskCreate

Postby PaulVdBergh » Thu Mar 01, 2018 10:51 am

Hi All,

As I'm new to ESP32 programming (and to FreeRTOS more specifically), I'm wondering what the best way is to create a thread of execution in an application. I'm used to using this design pattern:

Code: Select all

#include <atomic>
#include <thread>

using namespace std;

class foo
{
	public:
		foo();
		~foo();
	private:
		void			threadFunc(void);
		thread			m_thread;
		atomic<bool>	m_bContinue;	//	atomic: written by ~foo(), read by the worker thread
};

foo::foo()
:	m_bContinue(true)
{
	m_thread = thread([this]{threadFunc();});
}

foo::~foo()
{
	m_bContinue = false;	//	ask the thread to stop...
	m_thread.join();		//	...and wait for it to finish
}

void foo::threadFunc()
{
	while(m_bContinue)
	{
		// do some stuff
	}
}
However, reading the ESP32 documentation and examples, I mostly see the use of xTaskCreate. I suppose that the previous code can be rewritten as follows:

Code: Select all

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

class bar
{
	public:
		bar();
		~bar();
	private:
		static void		TaskFunc(void*);
		TaskHandle_t	m_pTask;
		bool			m_bContinue;
};

bar::bar()
:	m_bContinue(true)
{
	//	pass 'this' so the static TaskFunc can reach the instance
	::xTaskCreate(&TaskFunc, "taskName", 2048 /* stack size */, this, tskIDLE_PRIORITY + 1, &m_pTask);
}

bar::~bar()
{
	m_bContinue = false;
	//	?? wait for task completion ??
	::vTaskDelete(m_pTask);
}

void bar::TaskFunc(void* params)
{
	bar* pThis = static_cast<bar*>(params);	//	recover the instance passed to xTaskCreate
	while(pThis->m_bContinue)
	{
		//	do some stuff
	}
}
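For the "?? wait ??" part, one option I can imagine is letting the task signal its own completion, e.g. with a binary semaphore (a sketch, untested; m_Done would be an extra SemaphoreHandle_t member, created in the constructor with xSemaphoreCreateBinary()):

Code: Select all

#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"

bar::~bar()
{
	m_bContinue = false;
	//	block until TaskFunc has signalled completion
	xSemaphoreTake(m_Done, portMAX_DELAY);
	vSemaphoreDelete(m_Done);
	//	no vTaskDelete(m_pTask) here: the task deletes itself below
}

void bar::TaskFunc(void* params)
{
	bar* pThis = static_cast<bar*>(params);
	while(pThis->m_bContinue)
	{
		//	do some stuff
	}
	xSemaphoreGive(pThis->m_Done);	//	tell the destructor we are done
	vTaskDelete(NULL);				//	a FreeRTOS task must never simply return
}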
So, what are the pros and cons of both approaches, and which one is preferred (and why)?

Thanks,
Paul

ESP_Sprite
Posts: 9708
Joined: Thu Nov 26, 2015 4:08 am

Re: Best Practice : std::thread vs xTaskCreate

Postby ESP_Sprite » Thu Mar 01, 2018 1:17 pm

Effectively, xTaskCreate is the native FreeRTOS function and is more flexible in that you can also set things like the stack size. std::thread is also usable (and actually built on xTaskCreate), with the major advantage that you can seamlessly port the C++ code you write for it to other platforms. So it depends on what you want: if you need portability, use std::thread; if you need flexibility in tweaking the task-creation parameters, use xTaskCreate.
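On recent ESP-IDF versions there is also a middle ground: std::thread sits on top of the pthread layer, and you can tweak the FreeRTOS parameters used for subsequently created threads through esp_pthread (a sketch; check the esp_pthread_cfg_t fields available in your IDF version):

Code: Select all

#include "esp_err.h"
#include "esp_pthread.h"

#include <thread>

void spawn_worker()
{
	//	Applies to std::threads created afterwards from this task.
	esp_pthread_cfg_t cfg = esp_pthread_get_default_config();
	cfg.stack_size = 8192;	//	bytes, like xTaskCreate on ESP-IDF
	cfg.prio = 5;			//	FreeRTOS task priority
	ESP_ERROR_CHECK(esp_pthread_set_cfg(&cfg));

	std::thread worker([]{ /* do some stuff */ });
	worker.join();
}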

PaulVdBergh
Posts: 58
Joined: Fri Feb 23, 2018 4:45 pm
Location: Brasschaat, Belgium

Re: Best Practice : std::thread vs xTaskCreate

Postby PaulVdBergh » Fri Mar 02, 2018 5:42 pm

ESP_Sprite wrote:if you need portability, you can use std::thread
How far does "portability" go? Let me explain my situation: I'm writing a new version of a C/C++ system I wrote earlier for Debian armhf (BeagleBone Black). I'm reusing portions of the BeagleBone code and trying to implement them on the ESP32 platform. The first part I'm porting is a dedicated signal generator. On the BBB it is implemented using the PRUSS subsystem, two independent and very predictable dedicated processors. On the ESP32 the signals can be generated with the RMT peripheral and two GPIO pins, both controlled through ESP-IDF, which in turn is based on FreeRTOS. The timing (synchronisation) between the start of the waveform (generated by the RMT) and the toggling of the GPIO pins is very strict; a context switch can disrupt this synchronisation and make the system malfunction. I think I can solve the sync problem by elevating the thread's priority:

Code: Select all

		//	m_thread is an std::thread; this assumes native_handle()
		//	maps onto the underlying FreeRTOS task handle.
		TaskHandle_t taskHandle = (TaskHandle_t)(m_thread.native_handle());
		UBaseType_t priority = uxTaskPriorityGet(taskHandle) + 1;
		vTaskPrioritySet(taskHandle, priority);
So, how far does portability go, given that a) the BBB uses the PRUSS subsystem, which is dedicated and thus not portable, and b) the ESP32 uses the RMT peripheral, which is AFAIK also dedicated?
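I suppose the usual answer is to keep the platform-specific part behind a small interface, so that only the peripheral driver differs per platform (a sketch, all names hypothetical):

Code: Select all

//	Portable code talks only to this interface...
class ISignalGenerator
{
	public:
		virtual ~ISignalGenerator() {}
		virtual void start(void) = 0;
		virtual void stop(void) = 0;
};

//	...while each platform supplies its own implementation:
//	PRUSS-based on the BBB, RMT-based on the ESP32.
class RmtSignalGenerator : public ISignalGenerator
{
	public:
		void start(void) override { /* configure rmt + gpio, start thread */ }
		void stop(void) override { /* stop thread, shut down rmt */ }
};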

ATM, my generator class looks as follows:

Code: Select all

/*
 * DCCGen.h
 *
 *  Created on: Feb 26, 2018
 *      Author: paulvdbergh
 */

#include "driver/gpio.h"
#include "driver/rmt.h"
#include <thread>

using namespace std;

#ifndef MAIN_DCCGEN_H_
#define MAIN_DCCGEN_H_

#include "DCCMessage.h"

namespace TBTIoT
{
	class DCCGen
	{
		public:
			typedef enum
			{
				PowerOff = 0,
				PowerOn
			} PowerState_t;

			DCCGen(	gpio_num_t RailcomGPIONum = GPIO_NUM_4,
					gpio_num_t PowerGPIONum = GPIO_NUM_2,
					gpio_num_t DccGPIONum = GPIO_NUM_0,
					rmt_channel_t channel = RMT_CHANNEL_0);

			virtual ~DCCGen();

		protected:
			virtual DCCMessage*	getNextDccMessage(void);


		private:
			void threadFunc(void);

			std::thread			m_thread;
			std::atomic<bool>	m_bContinue;	//	written by ~DCCGen(), read in threadFunc()

			rmt_config_t 	m_rmtConfig;

			PowerState_t	m_PowerState;

			gpio_num_t		m_PowerGPIONum;
			gpio_num_t		m_RailcomGPIONum;
			gpio_num_t		m_DccGPIONum;
			rmt_channel_t	m_Channel;
	};
}	//	namespace TBTIoT

#endif /* MAIN_DCCGEN_H_ */

Code: Select all

/*
 * DCCGen.cpp
 *
 *  Created on: Feb 26, 2018
 *      Author: paulvdbergh
 */

#include "DCCGen.h"

#include "freertos/task.h"

#include <cstring>
#include <chrono>
#include <iostream>

using namespace std;

namespace TBTIoT
{

	DCCGen::DCCGen(	gpio_num_t RailcomGPIONum /* = GPIO_NUM_4 */,
					gpio_num_t PowerGPIONum /* = GPIO_NUM_2 */,
					gpio_num_t DccGPIONum /* = GPIO_NUM_0 */,
					rmt_channel_t channel /* = RMT_CHANNEL_0 */
				  )
	:	m_bContinue(true)
	,	m_PowerState(PowerOn)
	,	m_PowerGPIONum(PowerGPIONum)
	,	m_RailcomGPIONum(RailcomGPIONum)
	,	m_DccGPIONum(DccGPIONum)
	,	m_Channel(channel)
	{
		gpio_set_direction(m_PowerGPIONum, GPIO_MODE_OUTPUT);
		gpio_set_direction(m_RailcomGPIONum, GPIO_MODE_OUTPUT);

		ESP_ERROR_CHECK(gpio_set_level(m_PowerGPIONum, 0));
		ESP_ERROR_CHECK(gpio_set_level(m_RailcomGPIONum, 0));

		memset(&m_rmtConfig, 0, sizeof(m_rmtConfig));

	    m_rmtConfig.rmt_mode = RMT_MODE_TX;
	    m_rmtConfig.channel = m_Channel;
	    m_rmtConfig.clk_div = 0x01;
	    m_rmtConfig.gpio_num = m_DccGPIONum;
	    m_rmtConfig.mem_block_num = 1;
	    m_rmtConfig.tx_config.idle_output_en = 1;
	    m_rmtConfig.tx_config.idle_level = RMT_IDLE_LEVEL_LOW;

	    ESP_ERROR_CHECK(rmt_config(&m_rmtConfig));
	    ESP_ERROR_CHECK(rmt_driver_install(m_rmtConfig.channel, 0, 0));


	    m_thread = thread([this]{threadFunc();});
	}

	DCCGen::~DCCGen()
	{
		m_bContinue = false;
		m_thread.join();
	}

	DCCMessage* DCCGen::getNextDccMessage()
	{
		//	TODO write implementation
		return nullptr;
	}

	void DCCGen::threadFunc()
	{

		const uint8_t _idleMsg[] = {0x03, 0xFF, 0x00, 0xFF};
		DCCMessage IdleMessage(_idleMsg);

		const rmt_item32_t preamble_items[PREAMBLE_NBR_CYCLES] =
		{
			DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT,
			DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT,
			DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT,
			DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT, DCCMessage::DCC_ONE_BIT
		};

		rmt_item32_t pItems[64];
		uint16_t ItemCount = 0;

		DCCMessage* pNextMessage = nullptr;

		//	Raise this thread's FreeRTOS priority (assumes native_handle()
		//	maps onto the underlying FreeRTOS task handle).
		TaskHandle_t taskHandle = (TaskHandle_t)(m_thread.native_handle());
		UBaseType_t priority = uxTaskPriorityGet(taskHandle) + 1;
		vTaskPrioritySet(taskHandle, priority);

		while(m_bContinue)
		{
			//	send the preamble_items
			ESP_ERROR_CHECK(rmt_write_items(m_rmtConfig.channel, preamble_items, PREAMBLE_NBR_CYCLES, false));

			//	RailCom gap: meanwhile we have 1856 µs to spend...
			usleep(28);
			//	Shutdown power
			gpio_set_level(m_PowerGPIONum, 0);
			usleep(2);
			//	Short the outputs
			gpio_set_level(m_RailcomGPIONum, 1);
			usleep(439);
			//	un-Short outputs
			gpio_set_level(m_RailcomGPIONum, 0);
			usleep(2);
			//	Restore power, but only if it is logically switched on
			if(m_PowerState == PowerOn)
			{
				gpio_set_level(m_PowerGPIONum, 1);
			}


			if(nullptr == (pNextMessage = getNextDccMessage()))
			{
				pNextMessage = &IdleMessage;
			}

			ItemCount = pNextMessage->getItemCount();
			ItemCount = (ItemCount > 64) ? 64 : ItemCount;
			memcpy(pItems, pNextMessage->getItems(), ItemCount * sizeof(rmt_item32_t));

			//	wait until preamble items finished
			ESP_ERROR_CHECK(rmt_wait_tx_done(m_rmtConfig.channel, PREAMBLE_WAIT_TIME));

			//	send dcc data items and wait until end of transmission
			ESP_ERROR_CHECK(rmt_write_items(m_rmtConfig.channel, pItems, ItemCount, true));
		}

		ESP_ERROR_CHECK(rmt_driver_uninstall(m_rmtConfig.channel));
	}
}	//	namespace TBTIoT
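The idea is that a subclass overrides getNextDccMessage() to feed real packets; the generator falls back to the idle message whenever it returns nullptr. For illustration, a subclass pulling packets from a FreeRTOS queue might look like this (a sketch, names hypothetical):

Code: Select all

#include "DCCGen.h"

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

namespace TBTIoT
{
	class QueuedDCCGen : public DCCGen
	{
		public:
			QueuedDCCGen(QueueHandle_t queue) : m_Queue(queue) {}

		protected:
			DCCMessage* getNextDccMessage(void) override
			{
				DCCMessage* pMsg = nullptr;
				//	non-blocking receive: nullptr when the queue is empty,
				//	so the generator keeps the track alive with idle messages
				if(xQueueReceive(m_Queue, &pMsg, 0) == pdTRUE)
				{
					return pMsg;
				}
				return nullptr;
			}

		private:
			QueueHandle_t m_Queue;
	};
}	//	namespace TBTIoT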
