Understanding the audio processing of ADF

Posted: Mon Aug 16, 2021 6:21 pm
by danyo93
Hi folks,

so I have a problem understanding how exactly the data flow, segmentation, and timing work inside audio elements and pipelines. Say I read an I2S stream at 24-bit/48 kHz stereo, which yields 96k samples × 24 bits of data each second. How is this audio data segmented into the frames that end up in the ring buffer? In other words, how many seconds of audio are in one frame, and is there a way to influence this? And how exactly is the internal read -> process -> write callback cycle scheduled?

I really want to understand this, because troubleshooting is a lot easier when you know what's happening under the hood. Does anyone have more insight and care to explain? It doesn't have to be an essay; a short explanation of the basic mechanism and a pointer to which source files to read would already be super helpful. I've tried to work through the audio element source code, but I couldn't figure that particular part out.
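To make my numbers concrete, here is the back-of-the-envelope math I'm working from. Note that the byte packing and the ring-buffer size below are my own assumptions, not values taken from the ADF source:

```python
# Rough estimate: how much audio fits in a ring buffer?
# Assumptions (hypothetical, not from the ADF source):
#   - 24-bit samples packed into 3 bytes (I2S drivers often pad them
#     into 4-byte slots instead, which raises the rate by a third)
#   - an 8 KiB ring buffer, purely as an illustrative size

SAMPLE_RATE = 48_000   # frames per second
CHANNELS = 2
BYTES_PER_SAMPLE = 3   # 24 bit, tightly packed

bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
print(bytes_per_second)  # 288000 bytes/s

ringbuf_bytes = 8 * 1024  # hypothetical buffer size
buffered_ms = 1000 * ringbuf_bytes / bytes_per_second
print(f"{buffered_ms:.1f} ms")  # ~28.4 ms of audio
```

So if the ring buffer really were on the order of a few KiB, it would only hold a few tens of milliseconds of audio, which is why I'd like to know how the frame size is actually chosen.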

Thanks a lot!

PS: It would be really nice if there were more detailed documentation for this kind of stuff. The ADF docs are, in general, often not much help here.