[PATCH 00/11] Introduce memfd support

On 2016-01-02 21:04, Ahmed S. Darwish wrote:
> Hi!

Hi! Long time no see :-)

>
> On Mon, Sep 28, 2015 at 10:47:09AM +0200, David Henningsson wrote:
>> 1)
>>
>> There was some discussion about what the pa_mem struct should look like. I
>> think it should just look like this:
>>
>> struct pa_mem {
>>    void *ptr;            /* start of the mmap()ed region */
>>    size_t size;          /* size of that mapping */
>>    pa_mem_type_t type;   /* posix shm or memfd backed */
>>    int fd;               /* backing file descriptor */
>>    unsigned id;          /* both shm and memfd need an id, see below */
>>    bool shm_do_unlink;   /* posix shm only: unlink the segment on free */
>> };
>>
>> Memfd and shm can then share some methods because they're both fd-backed
>> (e.g., closing means calling munmap() and then close() for both).
>>
>> Plus, it's all much simpler than anonymous unions and structs, IMO.
>>
>
> Indeed. After doing exactly that in v2, a lot of code duplication
> between POSIX SHM and memfds has been removed, and the patch series is
> now much smaller. Thanks a lot.
>
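(To spell out what I meant by the shared methods in 1) above -- rough
sketch only; the helper name and details below are made up, not the
actual pa_mem API:)

#include <sys/mman.h>   /* munmap */
#include <unistd.h>     /* close */

/* Both memfd- and posix-shm-backed memory are just an mmap()ed file
 * descriptor, so the common teardown is munmap() + close(). */
static void mem_close_fd_backed(struct pa_mem *m) {
    if (m->ptr)
        munmap(m->ptr, m->size);   /* drop the mapping */

    if (m->fd >= 0)
        close(m->fd);              /* plain close() on the raw fd */

    m->ptr = NULL;
    m->size = 0;
    m->fd = -1;
}

The posix-shm variant would additionally shm_unlink() the segment when
shm_do_unlink is set; memfd needs nothing extra, since the region goes
away with its last fd and mapping.
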
>>
>> 2)
>>
>> The second big thing that confuses me is how the memfd is initially
>> set up and sent over the pipe to the client. Am I reading your code wrong,
>> or do you actually send an extra memfd packet carrying the fd *for every
>> packet* sent?
>>
>
> You're reading the code right. Definitely a problem to be fixed.
>
>> The per-client memfd should be set up by the server at SHM negotiation time
>> (and sealed, but you've said that's a future patch set). Then send a packet
>> with the memfd as ancil data (or extend PA_COMMAND_ENABLE_SRBCHANNEL, if
>> that's simpler). This packet should also have an ID that identifies the
>> memfd.
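
(By "ancil data" I mean the usual SCM_RIGHTS fd passing over the native
protocol's unix socket. Rough sketch below; in the real code this would
of course go through iochannel/pstream rather than a raw sendmsg(), and
the function name is invented:)

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one packet (e.g. the one carrying the memfd's ID) with the
 * memfd file descriptor attached as SCM_RIGHTS ancillary data. */
static ssize_t send_packet_with_memfd(int sock, const void *pkt, size_t len,
                                      int memfd) {
    struct iovec iov = { .iov_base = (void *) pkt, .iov_len = len };
    union {
        struct cmsghdr align;
        char buf[CMSG_SPACE(sizeof(int))];
    } u;
    struct msghdr msg;
    struct cmsghdr *cmsg;

    memset(&u, 0, sizeof(u));
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &memfd, sizeof(int));

    return sendmsg(sock, &msg, 0);
}
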
>
> I see a problem with this part. If we do it as described above, aren't we
> limiting the pstream connection to sending memblocks only from a _single_
> memfd-backed pool?
>
> Imagine the following:
>
> // PA node 1
> pa_pstream_send_memblock(p, memblock1);    // memblock1 is inside pool1
> pa_pstream_send_memblock(p, memblock2);    // memblock2 is inside pool2
>
> If pool1 and pool2 are backed by different memfd regions, how would the
> above suggestion handle that case?

Hmm, to ask a counter-question: why would you put them in different
pools in the first place? Why would you need more than one pool per pstream?

>> Every memfd and shm segment now has a unique ID, so you can just put that
>> ID in when you do the memexport; on the sender side you don't need to
>> distinguish between the two types.
>> On the memimport side you'd look this ID up and see if it matches a
>> previously received memfd; if not, it's a posix-shm ID that you need to
>> call pa_shm_attach on.
>>
>> Does that make sense to you?
>>
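(In code, I picture the receiving side doing roughly the below. All the
names here are illustrative, and I'm assuming pa_shm_attach() gets
reworked to take the new pa_mem struct plus an ID:)

/* Look the incoming ID up: reuse an earlier-received memfd region,
 * otherwise fall back to attaching the posix-shm segment by ID. */
static struct pa_mem *resolve_mem_id(pa_hashmap *received_memfds, unsigned id) {
    struct pa_mem *m;

    if ((m = pa_hashmap_get(received_memfds, PA_UINT_TO_PTR(id))))
        return m;   /* an fd for this ID arrived earlier as ancil data */

    /* Not a known memfd ID, so it must be posix shm. */
    m = pa_xnew0(struct pa_mem, 1);
    if (pa_shm_attach(m, id) < 0) {
        pa_xfree(m);
        return NULL;
    }

    return m;
}
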
>
> Ah, definitely agree. This is much simpler.
>
> If my claim above is valid though, I might just implement it in a slightly
> different way:
>
> ``Instead of sending the memfd file descriptor at SHM negotiation
>    time, do it in pstream.c::prepare_next_write_item().
>
>    To avoid the problem of sending the memfd fd for every packet sent,
>    track the memfd-backed blocks sent earlier using their randomly
>    generated IDs.
>
>    If this is the first time we encounter this ID, send the memfd file
>    descriptor as ancil data with the packet. If this is not the first
>    encounter, just send the ID without any ancil data. The other end
>    is already tracking this ID and has a memfd file descriptor open
>    for it from earlier communication.''
>
> What do you think?
>
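(For reference, the tracking you describe could be as small as this.
Sketch only: memfd_ids_sent and the field names are invented, and the
real thing would sit next to prepare_next_write_item() in pstream.c:)

/* Returns true if this is the first block we send from this memfd
 * region, i.e. its fd still has to go out as ancillary data. */
static bool memfd_fd_needs_sending(pa_pstream *p, const struct pa_mem *m) {
    if (pa_hashmap_get(p->memfd_ids_sent, PA_UINT_TO_PTR(m->id)))
        return false;   /* receiver already holds an fd for this ID */

    /* Remember the ID; any non-NULL value works as a set marker. */
    pa_hashmap_put(p->memfd_ids_sent, PA_UINT_TO_PTR(m->id), PA_UINT_TO_PTR(1));
    return true;
}

Tying that hashmap's lifetime to the pstream would also mean the
tracking resets naturally when the connection goes away.
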
> Thanks again, and happy holidays :)
>

-- 
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic

