Re: multiJACK patch management: the first glimmerings of success


On 04/07/2016 05:15 PM, Paul Davis wrote:
> On Thu, Apr 7, 2016 at 10:53 AM, Markus Seeber <
> markus.seeber@xxxxxxxxxxxxxxx> wrote:
> 
>> On 04/07/2016 01:57 AM, Robin Gareus wrote:
>>>
>>> How would that effectively differ from running at twice the buffersize?
>>>
>>> That approach just moves the load to two CPU cores. If those two cores
>>> can produce the result in a given time, a single core can do the same
>>> with twice the buffersize. Identical total latency.
>>>
>>> 2c,
>>> robin
>>
>> This is called interleaved scheduling and gives a scheduler the option
>> to run tasks in parallel by interleaving two (or more) process cycles.
>> This differs from using a double buffer size because the scheduler has
>> more opportunity to fill "holes" in the schedule that are introduced by
>> client dependencies in the graph.
>>
>> Doing this by hand (as described by bridging multiple jack instances)
>> moves the responsibility of interleaving the two jack schedules to the
>> kernel scheduler and may theoretically bring better resource usage in
>> some special cases, but also has some pitfalls.
>>
> 
> given that a process callback should NEVER enter the kernel or use any
> user-space blocking mechanisms, the only way this could possibly help is
> dealing with transitions between clients. it could NEVER result in better
> performance if kernel scheduling intervened within a single client's
> process callback.

Yes, I was referring (or intended to refer) to splitting and
interleaving _between_ clients, not inside the process call of one
client. To be exact, this could only happen at the point where JACK
has to wait for multiple clients to finish before it can start the
next client in the dependency chain, and where one of those clients
frees a core/thread. I think I described that rather badly. _If_ done,
this is obviously better done inside the JACK2 scheduler than via
other means, which is why I am also very skeptical that it works by
using multiple JACK instances; and if it does work in a few
"accidental" cases, I assume it will not be very reliable.
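To illustrate the kind of "hole" I mean, here is a toy greedy
list-scheduling sketch. The client names, durations and the two-worker
setup are all invented for illustration -- nothing here is measured
JACK behaviour:

```python
# Toy illustration of "interleaved" scheduling across two process
# cycles. Client names, durations (in ms) and the 2-worker setup are
# invented for this sketch -- nothing here is measured JACK behaviour.

def schedule(tasks, deps, dur, workers=2):
    """Greedy earliest-start list scheduler; returns the makespan."""
    done = {}                 # task -> finish time
    free = [0] * workers      # per-worker next-free time
    pending = set(tasks)
    while pending:
        best = None
        for t in sorted(pending):
            if all(d in done for d in deps.get(t, ())):
                ready = max((done[d] for d in deps.get(t, ())), default=0)
                w = min(range(workers), key=lambda i: free[i])
                start = max(ready, free[w])
                if best is None or start < best[0]:
                    best = (start, t, w)
        start, t, w = best
        done[t] = start + dur[t]
        free[w] = done[t]
        pending.remove(t)
    return max(done.values())

# One cycle: A (3 ms) and B (1 ms) feed C (1 ms); B's worker idles
# from t=1 to t=3 -- the "hole" left by the dependency on A.
one_cycle = schedule(["A1", "B1", "C1"], {"C1": {"A1", "B1"}},
                     {"A1": 3, "B1": 1, "C1": 1})
print("two sequential cycles:", 2 * one_cycle)        # -> 8

# Interleaved: a cycle-2 task may start as soon as its own
# dependencies (including its client's cycle-1 run) are finished, so
# B2 fills the hole and the two cycles finish in 7 ms instead of 8.
dur = {"A1": 3, "B1": 1, "C1": 1, "A2": 3, "B2": 1, "C2": 1}
deps = {"C1": {"A1", "B1"}, "A2": {"A1"}, "B2": {"B1"},
        "C2": {"A2", "B2", "C1"}}
print("interleaved cycles:   ", schedule(list(dur), deps, dur))  # -> 7

# Doubling the buffer size instead (Robin's comparison) doubles every
# duration in a single cycle and lands back at 8 ms total latency:
print("double buffer size:   ", schedule(["A", "B", "C"],
      {"C": {"A", "B"}}, {"A": 6, "B": 2, "C": 2}))   # -> 8
```

Of course a real graph has far more clients and far noisier run times;
the point is only that interleaving can beat both the sequential and
the double-buffer schedule when dependencies leave cores idle.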

> 
> but ... jack2 already runs multiple threads, and will activate clients on
> separate cores when appropriate. so JACK2 already does this, which is why
> those of us who have been around the block on this for quite some time are
> bit skeptical about jonathan's "experiment".

Yes, I understand that it schedules the clients of _one_ jack
cycle[*], but it does not interleave the schedules of successive jack
cycles, which is what I intended to talk about.
I am not sure this is possible to implement within JACK, since it
requires calling every client twice in one schedule, probably in an
order some clients may not expect.

> 
> i once considered submitting a paper to Usenix about JACK because if you
> forget about audio+MIDI entirely, it still represents a rather interesting
> case of user-space scheduling being *more* optimal than anything the kernel
> can do. In the 90's i worked a little on what was then a fairly radical
> kernel scheduling design ("scheduler activations") that attempted to get
> the benefits of both kernel scheduling and user space scheduling. in some
> respects, JACK (even JACK2 itself, which brings a whole additional layer of
> scheduling complexity that I had nothing to do with) represents some of what we
> learned during the experiments with SA on a real OS, even as it points out
> the desirability of a new system call that hands the processor to another
> thread directly rather than relying on intermediate objects like futexes.

So some kind of cooperative threading model? Interesting... Are the
results accessible somewhere?

> 
> 

I do think that scheduling clients inside JACK is a better approach
than handing this off to a generic scheduler in the kernel, so yes, I
agree, I guess.

> 
>> The right way to do this would be to implement it in the sound
>> server,
> 
> 
> jack (1 or 2) *is* the sound server.

Sorry for not being exact with my language; I did not intend to state
otherwise. I was just trying to be more generic.

> there is no kernel sound server. and
> if there was, there's no evidence it could do any better job than jack does,
> other than doing direct handoff from thread to thread without relying on
> futex/fifo wakeups.
> 

Yes. That's why the scheduling belongs in the JACK server.
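For what it's worth, the wakeup relay Paul describes (each client
sleeping on an intermediate object until its predecessor signals it)
can be sketched in a few lines of Python. threading.Event stands in
for the futexes/FIFOs a real server would use, and the client names
are made up; a direct hand-the-processor-over syscall would remove
exactly these intermediate objects:

```python
import threading

# Sketch of the wake-up relay between graph clients: each client
# sleeps on its own synchronization object and signals the next one
# when its "process callback" is done. threading.Event stands in for
# the futex/FIFO primitives mentioned above; names are invented.

def make_chain(names):
    events = {n: threading.Event() for n in names}
    order = []  # records execution order for inspection

    def client(i, name):
        events[name].wait()          # sleep until woken by predecessor
        order.append(name)           # placeholder for the process callback
        if i + 1 < len(names):
            events[names[i + 1]].set()   # hand off to the next client

    threads = [threading.Thread(target=client, args=(i, n))
               for i, n in enumerate(names)]
    for t in threads:
        t.start()
    events[names[0]].set()           # the server kicks off the cycle
    for t in threads:
        t.join()
    return order

print(make_chain(["capture", "synth", "fx", "playback"]))
# -> ['capture', 'synth', 'fx', 'playback']
```

Every hop here goes to sleep on one object and wakes another, which is
precisely the futex-style round trip a direct thread-to-thread switch
would avoid.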


[*] Maybe there is some language barrier here: I was using "client
cycle" as a synonym for one invocation of a client's callback. A "jack
cycle" would be one run of all clients in the graph, in topological
order.
_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user


