On Mon, 2014-11-24 at 15:10 +0100, David Henningsson wrote:
>
> On 2014-11-24 14:18, Tanu Kaskinen wrote:
> > On Mon, 2014-11-24 at 13:10 +0100, David Henningsson wrote:
> >>
> >> On 2014-11-24 12:04, Tanu Kaskinen wrote:
> >>> On Mon, 2014-11-24 at 11:12 +0100, David Henningsson wrote:
> >>>>
> >>>> On 2014-11-20 04:44, Ricardo Salveti wrote:
> >>>>> Hey,
> >>>>>
> >>>>> When investigating the issue we had on Ubuntu Touch, described by bug
> >>>>> https://bugs.launchpad.net/ubuntu-rtm/+source/pulseaudio/+bug/1391230,
> >>>>> I noticed that PulseAudio never releases the resources used by an
> >>>>> active stream if the app gets a SIGSTOP, keeping PulseAudio busy and
> >>>>> consuming CPU until the app resumes or is closed by the user.
> >>>>>
> >>>>> On Ubuntu Touch that happens when the application is actively playing
> >>>>> audio/video and the user moves back to the home scopes (the Ubuntu
> >>>>> Touch lifecycle will automatically send a SIGSTOP after 5 seconds).
> >>>>> When checking this on my desktop, I noticed that the same thing
> >>>>> happens there (by forcing a SIGSTOP on mplayer, for example).
> >>>>> PulseAudio only releases the stream when the app pauses it, not when
> >>>>> the app stops after receiving the signal.
> >>>>>
> >>>>> I raised this first with David to understand whether it was indeed a
> >>>>> valid use case, and he said it was probably something that had never
> >>>>> really been considered
> >>>>> (https://bugs.launchpad.net/ubuntu-rtm/+source/pulseaudio/+bug/1391230/comments/8).
> >>>>>
> >>>>> So before trying to deep dive and find a fix for the issue, I first
> >>>>> wanted to understand from you guys whether this is indeed a valid
> >>>>> issue and what the best way to get it fixed would be. I know we're
> >>>>> still using PulseAudio 4.0 on Ubuntu, but I wanted to make sure we
> >>>>> get something that would also be compatible with upstream.
> >>>>
> >>>> So when I discussed this with Ricardo, my suggestion was that we either
> >>>>
> >>>> 1) (ab)use the cork mechanism to stop/resume the client when we
> >>>> discover that the client is SIGSTOPped, or
> >>>
> >>> This implies that we implement server-side corking, because currently
> >>> the cork status is controlled by the client, and this "stalled"
> >>> corking would be controlled by the server.
> >>>
> >>> Even though I have mentioned the "server-side corking" feature before
> >>> (as something that we need and use in Tizen), I might not have
> >>> explained how I think it should be implemented (Tizen's current
> >>> implementation is less flexible than what I'd like). We should have a
> >>> hashmap of "cork requests" in sink inputs and source outputs. The
> >>> stream would be corked as long as the hashmap isn't empty, i.e. as
> >>> long as someone thinks that the stream should be corked. The hashmap
> >>> keys would be strings identifying the "corking reasons", so policy
> >>> modules could easily add new reasons. The normal client-initiated
> >>> corking would be one corking reason, and this stall detection would
> >>> be another reason. Tizen's policy module would implement a third
> >>> reason.
> >>
> >> +1 in general for this idea, to consolidate the "stall" concept with
> >> the "server side cork" concept; the fewer concepts covering more use
> >> cases, the better :-)
> >>
> >> I wonder what we should do with the "cork requests" though. I feel
> >> that under this scheme, we should deprecate them and replace them
> >> with a "you have been (un)corked" message, possibly including the
> >> different reasons as strings, so the client can determine whether the
> >> cork happened because the client requested it, or for some other
> >> reason.
> >
> > I'd vote for not removing the old cork request feature, because there
> > are applications relying on it. If the feature is seen as unnecessary
> > for new/updated applications, I'm ok with marking it as deprecated in
> > the documentation.
> >
> > As for sending some new kind of message like "you've been corked", I
> > don't think that's necessary. If the introspection API allows pactl to
> > show the cork status, including corking done by the server (possibly
> > also including the reason string(s)), then the introspection API
> > should be sufficient also for applications that need to get notified
> > when their own stream gets corked by the server.
>
> And if the reasons change, that gives an event/notification on the
> stream?

Yes, a subscription notification as usual for changed object attributes.
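
To make the hashmap idea a bit more concrete, here's roughly what I have
in mind (a sketch only; the cork_reasons field and both functions below
don't exist anywhere yet, so all the names are made up):

/* This would live in pulsecore/sink-input.c. cork_reasons would be a
 * new pa_sink_input field, created in pa_sink_input_new() with
 * pa_hashmap_new(pa_idxset_string_hash_func,
 * pa_idxset_string_compare_func). The normal client-initiated cork
 * would be just one well-known reason string, e.g. "application". */

void pa_sink_input_add_cork_reason(pa_sink_input *i, const char *reason) {
    char *key;

    if (pa_hashmap_get(i->cork_reasons, reason))
        return; /* this reason is already active, nothing to do */

    /* store the string as both key and value so that remove() below
     * gets the allocated copy back for freeing */
    key = pa_xstrdup(reason);
    pa_hashmap_put(i->cork_reasons, key, key);

    if (pa_hashmap_size(i->cork_reasons) == 1) {
        /* first reason added: actually cork the stream */
        pa_sink_input_cork(i, TRUE);

        /* subscribers (pactl, policy modules, the owning client) get
         * the usual change event and can re-read the reason list */
        pa_subscription_post(i->core, PA_SUBSCRIPTION_EVENT_SINK_INPUT |
                             PA_SUBSCRIPTION_EVENT_CHANGE, i->index);
    }
}

void pa_sink_input_remove_cork_reason(pa_sink_input *i, const char *reason) {
    char *key;

    if (!(key = pa_hashmap_remove(i->cork_reasons, reason)))
        return; /* this reason wasn't set */

    pa_xfree(key);

    if (pa_hashmap_size(i->cork_reasons) == 0) {
        /* last reason removed: uncork and notify as above */
        pa_sink_input_cork(i, FALSE);
        pa_subscription_post(i->core, PA_SUBSCRIPTION_EVENT_SINK_INPUT |
                             PA_SUBSCRIPTION_EVENT_CHANGE, i->index);
    }
}

The stall detection would then call pa_sink_input_add_cork_reason(i,
"stalled") when it notices a SIGSTOPped client, and the remove variant
once the client resumes.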
> > Talking about the client API, there's the question of how
> > pa_sink_input_info.corked and pa_stream_is_corked() should behave.
> > For backwards compatibility reasons, I think at least
> > pa_stream_is_corked() should return 1 only if the stream is corked by
> > the application, and possibly it would be good to apply the same
> > policy to pa_sink_input_info.corked too (but I think changing the
> > pa_sink_input_info.corked semantics is less likely to cause breakage).
> >
> > Hmm... I realized there's another problem with stopped clients: not
> > only does their streaming stall, but they also stop reading events
> > that they may be subscribed to. From the code, it looks like the
> > daemon will buffer data until it runs out of memory, which seems like
> > a bad idea...
> >
> > Let's assume that there's a simple application that just subscribes
> > to events and does nothing else ("pactl subscribe" would fit the
> > description). What should we do when such an application gets
> > stopped? How do we notice that it's stopped, if it's not streaming?
> > Should we send a ping message periodically? Or should we keep track
> > of the amount of queued data, and if that reaches some threshold,
> > consider the client stopped? Once we have declared the client
> > unresponsive, should we disconnect it, or just drop subscription
> > events until it becomes responsive again? In the latter case I think
> > we should notify the application that events have been dropped.
>
> I think most apps would like to receive the buffered events once they
> are resumed, so that should be the best option for shorter stops.
>
> But I also think it would be better to disconnect the client in case
> of overflow, rather than dropping events. Advanced clients should be
> able to deal with disconnect/reconnect and would then re-read the
> current state after reconnecting.
>
> Or we could generalise this by dealing with it at the
> iochannel/srbchannel level, i.e. we could disconnect the client in
> case the pstream.send_queue gets too large? That should at least
> protect the PulseAudio server from OOM, both for notifications and for
> recording streams.
>
> If the send_queue has more than, say, 100 [1] messages, we start a
> timer for 30 [1] seconds. If the send_queue is empty, we stop the
> timer. If the timer expires, the client is disconnected. How about
> that?

That sounds reasonable to me.
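Roughly something like this, perhaps (again just a sketch;
send_queue_length and overflow_event are made-up pa_pstream fields, so
pa_pstream would need to grow a counter for queued packets and a timer
event, and the limits would need to be tunable):

/* This would live in pstream.c. The die callback is the existing
 * notification that protocol-native already uses to drop a
 * connection. */

#define OVERFLOW_PACKETS 100                     /* "say, 100" [1] */
#define OVERFLOW_TIMEOUT (30 * PA_USEC_PER_SEC)  /* "say, 30 s" [1] */

static void overflow_timeout_cb(pa_mainloop_api *m, pa_time_event *e,
                                const struct timeval *tv, void *userdata) {
    pa_pstream *p = userdata;

    /* the send queue stayed over the limit for the whole grace
     * period: treat the client as unresponsive and drop it */
    if (p->die_callback)
        p->die_callback(p, p->die_callback_userdata);
}

/* called whenever a packet is pushed to send_queue and whenever one
 * has been written out to the socket */
static void check_send_queue(pa_pstream *p) {
    if (p->send_queue_length > OVERFLOW_PACKETS && !p->overflow_event) {
        struct timeval tv;

        /* over the threshold: start the grace timer */
        pa_gettimeofday(&tv);
        pa_timeval_add(&tv, OVERFLOW_TIMEOUT);
        p->overflow_event = p->mainloop->time_new(p->mainloop, &tv,
                                                  overflow_timeout_cb, p);

    } else if (p->send_queue_length == 0 && p->overflow_event) {
        /* the client caught up: cancel the pending disconnect */
        p->mainloop->time_free(p->overflow_event);
        p->overflow_event = NULL;
    }
}

-- 
Tanu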