Is there a method or implementation built into audio_pipeline to take one element and simultaneously link several other elements to its output?
As an example, if I want to hear realtime DSP, I use i2s_reader (from the CODEC) ---> DSP element in the pipeline (e.g. EQ) ---> i2s_writer (back to the codec), which is simple enough. But now say I want to feed the EQ into i2s_writer for local low-latency monitoring, and also feed the EQ into an HTTP stream in the pipeline, so that someone else can hear what I'm doing over a server. So, is it possible for the EQ to feed two writers, or is it possible for i2s_writer to feed both the hardware and an HTTP stream? (A rough sketch of the single-chain case follows the diagrams below.)
i2s_reader ------> DSP -----> i2s_writer
                      \-----> http_stream
OR
i2s_reader -----> DSP -----> i2s_writer ----> http_stream
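For reference, here is roughly how the single-chain case above maps onto the pipeline API. The tags, the equalizer element, and the default-config macros are the stock ADF ones as I understand them, but treat this as a sketch to check against your ADF version rather than verified code:

#include "audio_pipeline.h"
#include "audio_element.h"
#include "i2s_stream.h"
#include "equalizer.h"

void build_monitor_chain(void)
{
    audio_pipeline_cfg_t pipeline_cfg = DEFAULT_AUDIO_PIPELINE_CONFIG();
    audio_pipeline_handle_t pipeline = audio_pipeline_init(&pipeline_cfg);

    // CODEC -> i2s_reader
    i2s_stream_cfg_t read_cfg = I2S_STREAM_CFG_DEFAULT();
    read_cfg.type = AUDIO_STREAM_READER;
    audio_element_handle_t i2s_reader = i2s_stream_init(&read_cfg);

    // DSP stage (EQ)
    equalizer_cfg_t eq_cfg = DEFAULT_EQUALIZER_CONFIG();
    audio_element_handle_t eq = equalizer_init(&eq_cfg);

    // i2s_writer -> back to the CODEC
    i2s_stream_cfg_t write_cfg = I2S_STREAM_CFG_DEFAULT();
    write_cfg.type = AUDIO_STREAM_WRITER;
    audio_element_handle_t i2s_writer = i2s_stream_init(&write_cfg);

    audio_pipeline_register(pipeline, i2s_reader, "i2s_read");
    audio_pipeline_register(pipeline, eq, "eq");
    audio_pipeline_register(pipeline, i2s_writer, "i2s_write");

    // audio_pipeline_link() takes a single linear chain of tags, which is
    // exactly why a second writer (http_stream) cannot simply be added in
    // parallel here - that is the open question in this thread.
    const char *link_tag[3] = {"i2s_read", "eq", "i2s_write"};
    audio_pipeline_link(pipeline, &link_tag[0], 3);

    audio_pipeline_run(pipeline);
}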
Parallel output to components in the audio_pipeline
Re: Parallel output to components in the audio_pipeline
I am looking at doing the same type of thing: i2s --> bt a2dp --> wav --> file. Have you found any way to do it? It looks like there can only be one stream writer at a time.
Re: Parallel output to components in the audio_pipeline
There appears to be a way to make an element's ring buffers multi-input and multi-output, but I don't think any of the elements you can currently register with the pipeline actually use it. So it would mean either using the ring buffer API directly and writing your own pipeline elements, or building your own pipeline without explicitly calling the pipeline API at all... Something along the lines of the sketch below is what I mean.
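This is a hedged sketch of a pass-through "splitter" element: it forwards its input to the normal downstream writer and also copies it into an extra ring buffer for a second consumer (e.g. an http_stream in its own pipeline). All names are illustrative, and the exact rb_* / audio_element_* signatures are assumptions to check against the ADF version in use:

#include "freertos/FreeRTOS.h"
#include "audio_element.h"
#include "ringbuf.h"

// Extra "tap" ring buffer that the second writer (e.g. http_stream) reads from.
static ringbuf_handle_t s_tap_rb;

static int splitter_process(audio_element_handle_t self, char *buf, int len)
{
    // Pull a block from the upstream element (the EQ in this thread's example).
    int r_size = audio_element_input(self, buf, len);
    if (r_size <= 0) {
        return r_size;  // pass errors / end-of-stream straight through
    }
    // Copy the block into the tap buffer for the HTTP side...
    rb_write(s_tap_rb, buf, r_size, portMAX_DELAY);
    // ...and forward it unchanged to the normal downstream (i2s_writer).
    return audio_element_output(self, buf, r_size);
}

audio_element_handle_t splitter_init(void)
{
    // Size and block arguments are guesses; check ringbuf.h in your ADF tree.
    s_tap_rb = rb_create(8 * 1024, 1);

    audio_element_cfg_t cfg = DEFAULT_AUDIO_ELEMENT_CONFIG();
    cfg.tag = "splitter";
    cfg.process = splitter_process;
    return audio_element_init(&cfg);
}

The splitter would then be registered and linked between "eq" and "i2s_write" in the main pipeline, and s_tap_rb handed to the HTTP writer as its input ring buffer (audio_element_set_input_ringbuf, if your ADF version exposes it) so it can run in a second pipeline or a plain task. Blocking with portMAX_DELAY keeps the two outputs in lockstep; a short timeout that drops data instead would protect the low-latency monitoring path if the HTTP side stalls.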