Glad to hear the discussion. From an outsider's point of view, there are a few things I don't get.
First, if each JACK app is a separate process, then theoretically you have to do a bunch of expensive process context switches for each audio buffer.
And then there is the interprocess communication itself. Does it use shared memory buffers?
This is in contrast to a typical plug-in architecture, where everything runs inside the host process. It amazes me that an interprocess scheme doesn't run into major problems under compute load when running with the small buffers needed for low latency.
What am I missing here?
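For concreteness, here is a minimal sketch of what a client seems to boil down to, assuming the standard libjack C API (the "passthru" client name and the copy loop are just placeholders): each app hands the server a process() callback, and the server wakes the clients once per period.

/* Minimal JACK client sketch. Each app registers a process() callback;
 * the server wakes the clients in graph order once per period, and port
 * buffers apparently live in shared memory, so moving audio between
 * processes is a wakeup plus a buffer handoff rather than a big copy. */
#include <stdio.h>
#include <jack/jack.h>

static jack_port_t *in_port, *out_port;

/* Called by the server once per period, inside its realtime cycle. */
static int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; i++)
        out[i] = in[i];                 /* straight passthrough */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("passthru", JackNullOption, NULL);
    if (!client)
        return 1;

    /* Registering ports is how a client advertises its "pins". */
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);              /* the server drives process() from here on */

    getchar();                          /* run until Enter is pressed */
    jack_client_close(client);
    return 0;
}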
The other thing I'm used to seeing in a plug-in API is setup for buffer sizes, sample rates, and audio I/O; the host centralizes this process.
Then there is advertising your pins and capabilities. I was mystified, when looking at the simple JACK examples, not to see code dealing with these issues (e.g. being sample-rate aware).
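On that setup question: from the headers it looks like the server owns the sample rate and buffer size, and a client just queries them and registers callbacks to hear about changes. A sketch, assuming the same libjack C API (recompute_filters() is a hypothetical stand-in for real DSP re-initialization):

#include <stdio.h>
#include <jack/jack.h>

/* Notified by the server if the sample rate changes. */
static int srate_cb(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    printf("sample rate is now %u Hz\n", nframes);
    /* recompute_filters(nframes);  hypothetical DSP re-init */
    return 0;
}

/* Notified by the server if the period size changes. */
static int bufsize_cb(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    printf("buffer size is now %u frames\n", nframes);
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("aware", JackNullOption, NULL);
    if (!client)
        return 1;

    /* One-time queries: the server, not the client, decides these values. */
    printf("rate: %u Hz, period: %u frames\n",
           jack_get_sample_rate(client), jack_get_buffer_size(client));

    /* Register for change notifications before activating. */
    jack_set_sample_rate_callback(client, srate_cb, NULL);
    jack_set_buffer_size_callback(client, bufsize_cb, NULL);

    jack_activate(client);
    getchar();
    jack_client_close(client);
    return 0;
}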