[PATCH v2 00/12] Introduce memfd support

On Fri, Feb 12, 2016 at 04:58:43PM +0100, David Henningsson wrote:
> 
> On 2016-02-12 16:04, Ahmed S. Darwish wrote:
...
> 
> Thanks for the explanations! This is a good summary.
>

anytime :-)

>
> >- We now have 3 mempools in the system: a global mempool, and 2
> >   per-client mempools -- one created by the client for passing
> >   playback audio, and one created by the server for srbchannels.
> >
> >- For any per-client memfd mempool, the file descriptors are
> >   closed immediately after being sent to the other side. The
> >   receiving end likewise closes all received file descriptors
> >   immediately after doing an mmap(). (A sketch of this lifecycle
> >   follows the list below.)
> >
> >   Thus there is no risk of data leaks in that case. The 'secret'
> >   shared by the two PA endpoints is discarded directly after use.
> >
> >- A special case is the global server-wide mempool. Its fd is
> >   kept open by the server, which passes it to every newly
> >   connecting client for the memfd-fd<->SHM-ID negotiation.
> >
> >   Even in this case, communication is then done using IDs and
> >   no further FDs are passed. The receiving end also does not
> >   distinguish between per-client and global mempools; it directly
> >   closes the fd after doing an mmap().
> >
> >- A question then arises: as was done with srbchannels, why not
> >   transform the global mempool to a per-client one?
> >
> >   This is planned, but is to be done in a follow-up project. The
> >   global mempool is touched *everywhere* in the system -- in
> >   quite different ways and usage scenarios. It's also used by a
> >   huge set of modules, including quite esoteric ones.
> >
> >   Touching this will require extensive testing for each affected
> >   part, so this will be quite a HUGE patch series of its own,
> >   possibly submitted in chunks of 10 patches at a time.
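
[ Interjecting with a rough sketch of the fd lifecycle described in
  the second and third points above. This is illustrative C only --
  not the actual pa_mempool/pa_memexport code -- the helper names are
  made up and most error handling is omitted: ]

#define _GNU_SOURCE
#include <sys/mman.h>     /* memfd_create() (glibc wrapper; use
                             syscall(SYS_memfd_create, ...) on older
                             glibc), mmap() */
#include <sys/socket.h>   /* sendmsg(), SCM_RIGHTS */
#include <fcntl.h>        /* fcntl(), F_ADD_SEALS, F_SEAL_* */
#include <string.h>
#include <unistd.h>

#define POOL_SIZE (64u * 1024 * 1024)

/* Sender: create the pool, pass its fd once, then forget it. */
static int send_pool_fd(int sock)
{
    int fd = memfd_create("pulseaudio", MFD_ALLOW_SEALING);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, POOL_SIZE) < 0) {
        close(fd);
        return -1;
    }

    /* Seal the size so it can't change under the receiver's feet. */
    fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL);

    /* Pass the fd in the unix socket's ancillary data. */
    char dummy = 'x', ctl[CMSG_SPACE(sizeof(int))] = { 0 };
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = ctl,
                          .msg_controllen = sizeof(ctl) };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
    sendmsg(sock, &msg, 0);

    close(fd);          /* per-client case: the 'secret' is gone now */
    return 0;
}

/* Receiver: mmap the fd it was handed, then close it immediately.
 * From here on, blocks are referenced by (ID, offset, length) only;
 * no further fds cross the wire. */
static void *import_pool(int fd)
{
    void *base = mmap(NULL, POOL_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);          /* nothing left to leak */
    return base == MAP_FAILED ? NULL : base;
}

[ The point being: once import_pool() returns, each side holds only a
  mapping; everything afterwards is IDs and offsets. ]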
> 
> Hmm. I'm thinking, to get the security without 100 patches first, we
> can start by not sharing the global mempool with the clients. That
> way, it will fall back to going over the socket, which might mean an
> extra memcpy, even if that socket is an srbchannel. But still, it
> would be secure. Right?
>
> Then, we can work on cleaning the global mempool up, by fixing
> modules one by one, the commonly used ones (such as the ALSA source
> modules) first. Indeed now we'll have to copy memory to each
> source_output->client mempool instead of to just one global mempool,
> so that will be a change.
>

Excellent.. So from the above, we can deduce the following:

1- Let's finalize the memfd series, including transforming the
   global mempool to shared memfd.

2- Hopefully things can be ready by PA 9.0, so distributors can
   ship a _pure_ memfd PA installation by default (no posix SHM).
   A rough daemon.conf sketch of that end state follows this list.

   This will give us some good testing from the Arch, Debian
   unstable, Ubuntu testing, and Fedora rawhide folks.

3- After finishing this series, let's kick off the new one which
   transforms the global pool to an old-school
   regular-data-copy-over-socket one.

4- So as not to incur extra latency on PA users, that very same
   patch series should transform all the hot global-mempool paths
   to per-client shared ones. This includes recording, where the
   global mempool is heavily used.

   We can then discuss, in one of the weekly IRC meetings, which
   parts are the highest-priority candidates for per-client
   transformation.
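
As promised under point 2 above, the distributor-facing end state in
/etc/pulse/daemon.conf could look something like the below. Note the
second option name is hypothetical -- i.e. whatever toggle this
series finally settles on:

    ; existing switch -- keep shared memory enabled
    enable-shm = yes
    ; hypothetical: back the pools with memfd instead of posix SHM
    enable-memfd = yes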

Sounds like a plan? :D

Thanks,
Darwish

>
> >   But when it's done, we'll have all the necessary infrastructure
> >   to directly secure it.
> >
> >   For now, we can completely disable posix SHM and things should
> >   function as expected. This is a win.. yes, it's incomplete
> >   until we make the global mempool per-client too, but it's
> >   still a step in the right direction. The memfd code path will
> >   also be heavily tested in the process.
> >
>

