[PATCH 00/11] Introduce memfd support

On Sat, Jan 09, 2016 at 04:09:57PM +0200, Ahmed S. Darwish wrote:
> On Fri, Jan 08, 2016 at 02:10:35PM +0100, David Henningsson wrote:
> > On 2016-01-02 21:04, Ahmed S. Darwish wrote:
> > >
> > > I'm having a problem in this part. By doing as said above, aren't we
> > > limiting the pstream connection to send memblocks only from a _single_
> > > memfd-backed pool?
> > >
> > > Imagine the following:
> > >
> > > // PA node 1
> > > pa_pstream_send_memblock(p, memblock1);    // memblock1 is inside pool1
> > > pa_pstream_send_memblock(p, memblock2);    // memblock2 is inside pool2
> > >
> > > If pool1 and pool2 are backed by different memfd regions, how would the
> > > above suggestion handle that case?
> > 
> > Hmm, to ask a counter question; why would you put them in different pools in
> > the first place? Why would you need more than one pool per pstream?
> >
> 
> I don't have a concrete use-case to answer this, but my understanding
> is that the two pa_pstream_send_memblock lines above will work as
> expected if 'pool1' and 'pool2' were backed by _different_ SHM files.
> 
> So when adding memfds as an alternative memory backend, it would be
> wise to _keep_ what is working still working .. unless there's a
> powerful reason not to.
>

OK, I retract what I said above. It seems that while the pstream
function signatures do indeed support multiple SHM files per
stream, the pstream.c and memblock.c _implementations_ do not.

A case in point is pa_memimport_get:

 // Same block ID, different SHM files
 block1 = pa_memimport_get(import, block_id = 10, shm_id = 111111);
 block2 = pa_memimport_get(import, block_id = 10, shm_id = 222222);

The current implementation caches blocks by their block ID __before__
checking their SHM file IDs. So, in effect, 'block1' and 'block2'
above resolve to the same memory block even though they come from
completely different SHM files.

And since there's only one memimport per pstream, this means that,
practically speaking, sending blocks from different SHM files over
the pipe will "work" only in a nondeterministic manner.

So given all of the above, and as originally advised, sending the
memfd fds at SHM negotiation time makes the most sense :-)

Regards,

-- 
Darwish
http://darwish.chasingpointers.com

