[PATCH v2 09/12] pulsecore: pstream: Support memfd blocks transport

On 2016-02-20 01:19, Ahmed S. Darwish wrote:
> Hi!
>
> On Tue, Feb 16, 2016 at 03:58:02PM +0100, David Henningsson wrote:
>>
>> On 2016-02-12 01:19, Ahmed S. Darwish wrote:
>>> Now that we have the necessary infrastructure for memexporting and
>>> memimporting a memfd memblock, extend that support higher up in the
>>> chain with pstreams.
>>>
>>> A PulseAudio endpoint can now _transparently_ send a memfd memblock
>>> to the other end by simply calling pa_pstream_send_memblock().
>>
>> ...provided the memfds are first registered.
>>
>
> Yup, will add that.
>
> ...
>>> @@ -92,6 +96,9 @@ struct item_info {
>>>
>>>       /* packet info */
>>>       pa_packet *packet;
>>> +    bool shmid_to_memfd_packet:1;
>>
>> This seems a bit strange to me. When you set up an srbchannel, there is a
>> special packet sent (PA_COMMAND_ENABLE_SRBCHANNEL) which has two fds as
>> ancil data.
>>
>> Would it not make sense (and lead to less code changes, probably) if this
>> followed a similar scheme, i.e., add a new command (PA_COMMAND_SHMID_FD or
>> something) that would have a memfd as ancil data and the ID as packet
>> content?
>>
>
> Hmm, I guess this should've been further commented.
>
> Let's first agree to the following:
>
> - to send a command like PA_COMMAND_ENABLE_SRBCHANNEL, a
>    tagstruct is built and sent over the pipe using
>    pa_pstream_send_tagstruct_with_fds()
>
>    [ protocol_native.c::setup_srbchannel() ]
>
> - pa_pstream_send_tagstruct_with_fds() transforms the tagstruct into
>    opaque packet data using a simple memcpy(), then the packet
>    with its ancillary data is sent over the domain socket.
>
>    [ pstream-util.c::pa_pstream_send_tagstruct_with_fds() ==>
>      pstream-util.c::pa_pstream_send_tagstruct_with_ancil_data()
>      ==> pstream.c::pa_pstream_send_packet() /* event deferred */
>      ==> pstream.c::do_write()               /* packet write */
>      ==> iochannel.c::pa_iochannel_write_with_fds()
>      ==> POSIX sendmsg()! ]
>
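
(For reference, on the sending side that path is kicked off roughly as
below -- just a sketch from memory of
protocol-native.c::setup_srbchannel(), so variable names like readfd,
writefd and c->pstream are approximations and the exact arguments may
differ.)

  /* Build the command as a tagstruct and hand it, together with the
   * fds, to the pstream.  The tagstruct is flattened into an opaque
   * packet, the fds ride along as ancillary data, and the actual
   * sendmsg() happens later in do_write() through
   * pa_iochannel_write_with_fds(). */
  pa_tagstruct *t;
  int fds[2];

  fds[0] = readfd;                        /* e.g. the srbchannel fds */
  fds[1] = writefd;

  t = pa_tagstruct_new(NULL, 0);
  pa_tagstruct_putu32(t, PA_COMMAND_ENABLE_SRBCHANNEL);
  pa_tagstruct_putu32(t, (uint32_t) -1);  /* tag */

  pa_pstream_send_tagstruct_with_fds(c->pstream, t, 2, fds);
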
> So, _on the wire_, both srbchannel enablement and SHMID<->memfd
> registration mechanisms are actually the same: the latter just
>    uses pa_pstream_send_packet() with a few small modifications.
>
> Given the above, why not create a PA_COMMAND_SHMID_FD and save us
> some code?
>
> Well, the reason is that we want to close the memfd fd directly
> after sending the packet and its ancillary using POSIX sendmsg().
> pa_pstream_send_tagstruct_**() does not offer us any ability to
> do that :-( It _opaques the message_ then defers the write event
> over the pstream.
>
> Due to such opaqueness, after a successful pstream do_write() I
> cannot just say: the packet written was a PA_COMMAND_SHMID_FD, so
> let's close its associated fds.

It seems quite straightforward just to add another parameter "bool 
close_fds" to pa_pstream_send_tagstruct_with_fds and struct 
pa_cmsg_ancil_data, and in pstream.c::do_write, right after the call to 
pa_iochannel_write_with_fds, you close the fds if the parameter is set?

That way you don't need any "cleanup hook" either.
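
Roughly, what I have in mind is something like the below -- only a
sketch, and I'm writing the struct fields and the helper name from
memory, so they may not match pstream.c exactly:

  /* pstream.h (sketch): let the ancil data itself say whether its fds
   * should be closed once they have been handed over to the kernel. */
  typedef struct pa_cmsg_ancil_data {
      int nfd;
      int fds[MAX_ANCIL_DATA_FDS];
      bool close_fds_on_transmit;       /* the new flag */
      /* ... creds fields etc. unchanged ... */
  } pa_cmsg_ancil_data;

  /* pstream.c (sketch): called from do_write(), right after a
   * successful pa_iochannel_write_with_fds(). */
  static void maybe_close_ancil_fds(pa_cmsg_ancil_data *ancil) {
      int i;

      if (!ancil || !ancil->close_fds_on_transmit)
          return;

      for (i = 0; i < ancil->nfd; i++)
          if (ancil->fds[i] >= 0) {
              pa_close(ancil->fds[i]);
              ancil->fds[i] = -1;
          }

      ancil->nfd = 0;
  }

pa_pstream_send_tagstruct_with_fds() would then just grow a matching
"close_fds" argument that it copies into the ancil struct.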

> An argument could then be made: why not wait for the other PA
> endpoint to reply/ACK the command and then close the fds? Well,
> this would open us up to malicious clients, leading to an fd leak
> per malicious client connection, and thus bringing the whole
> server down after a while.
>
> ==
>
> Thus the patch's solution, which is actually simple, was created:
> mark the SHM_ID<->memfd registration packet with a special color.
> Now in pstream.c::do_write() I can see that this is a registration
> packet and call a simple cleanup hook after the write(). The hook
> does all the necessary validations and then closes the fds.
>
> This way, the fds are always safely closed on the sending side.
>
> [ After writing the above, I've just discovered a bug! If sendmsg
>    fails, the cleanup hook should also be called but it is not.
>    Will fix that ]
>
>>
>> As a side note, we could potentially optimise setting up an srbchannel by
>> sending all three memfds in PA_COMMAND_ENABLE_SRBCHANNEL, it'll be one less
>> packet to send...
>>
>
> I'm really not a big fan of this... It will hardcode both the
> number, _and nature_, of mempools in the Pulse communication
> protocol itself. Are you sure we want to do that?

Hey, it was just an idea :-)

Indeed that would mean some additional code, probably not worth it at 
this point. There are just these really quick clients, e.g. if you do 
"pactl list sinks" or something, and we're adding packet after packet 
just to set the connection up. But thinking about it, this would 
probably be better fixed if, e.g., "pactl" could just disable SHM in 
the first place.

>
> So in the future, if we merge the per-client srbchannel mempool
> with the envisioned per-client (earlier global) mempool, we will
> have to change the protocol semantics. In the far future, if we
> reach the optimum case of a single mempool per client (not 3),
> we'll also have to change the protocol semantics.
>
> And in the present, we will have to insert lots of if conditions
> and such if the client does not support memfds. Moreover, the fd
> sending will be bidirectional: three fds from the server if memfds
> are supported [1 srbchannel, 1 srbchannel mempool, 1 global
> mempool], and one fd from the client [audio data mempool]. And if
> memfds are not supported, then 1 fd from the server and zero from
> the client. This really complicates the protocol _a lot_ :-(
>
> The current situation does not have this limitation at all.. Any
> endpoint can create a shared mempool and send memblock references
> to the other end. This preserves the dynamism inherent in the
> protocol and does not artificially limit it further.

-- 
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic

