[PATCH 02/11] pulsecore: srbchannel: Introduce per-client SHM files


 



Hi Tanu,

Sorry for the very late replies :-)

On Tue, Sep 22, 2015 at 01:01:24PM +0300, Tanu Kaskinen wrote:
> On Sun, 2015-09-20 at 23:28 +0200, Ahmed S. Darwish wrote:
> > The PA daemon currently uses a server-wide SHM file for all clients
> > sending and receiving commands using the srbchannel.
> > 
> > To protect the clients from each other, and to provide the necessary
> > groundwork later for policy and sandboxing, create the srbchannel SHM
> > files on a per-client basis.
> > 
> > Signed-off-by: Ahmed S. Darwish <darwish.07 at gmail.com>
> > ---
> >  src/pulsecore/client.c          |  5 +++++
> >  src/pulsecore/client.h          |  7 +++++++
> >  src/pulsecore/core.c            | 15 +++++----------
> >  src/pulsecore/core.h            |  6 ++----
> >  src/pulsecore/protocol-native.c | 16 ++++++++++++----
> >  5 files changed, 31 insertions(+), 18 deletions(-)
> > 
> > diff --git a/src/pulsecore/client.c b/src/pulsecore/client.c
> > index 003bcf8..5d60ad6 100644
> > --- a/src/pulsecore/client.c
> > +++ b/src/pulsecore/client.c
> > @@ -69,6 +69,8 @@ pa_client *pa_client_new(pa_core *core,
> > pa_client_new_data *data) {
> >      c->sink_inputs = pa_idxset_new(NULL, NULL);
> >      c->source_outputs = pa_idxset_new(NULL, NULL);
> >  
> > +    c->rw_mempool = NULL;
> 
> You could add a patch that changes
> 
>     c = pa_xnew(pa_client, 1);
> 
> to
> 
>     c = pa_xnew0(pa_client, 1);
> 
> so that the struct fields don't need to be initialized to zero
> manually.
>

Now done by David's 0284363aa1da05 commit :-)

> ...hmmm... Is there actually any good reason to have rw_mempool in
> pa_client? Could it be moved to the native protocol? It looks like the
> only user outside the native protocol is pa_core_maybe_vacuum().
>

ACK.

Now changed in v2: the per-client mempool (rw_mempool) is created on
the server side (inside pa_native_connection) when the server receives
a new client connection request at pa_native_protocol_connect().

This indeed goes better with the global pattern of all mempools being
created on the server side.

> If I understand vacuuming correctly, it means telling the kernel that
> the memory in unused memblocks probably won't be used any time soon,
> and that we also don't care about the current data in them, so the
> kernel can optimize memory usage accordingly. pa_core_maybe_vacuum()
> only vacuums when there's no audio moving in the system.

That's what I understood from the code, yes.

> When you
> change rw_mempools to be created separately for each client, I think it
> would make sense to vacuum the per-client mempool every time the client
> stops streaming. That logic could live inside the native protocol, so
> the core wouldn't have to know about rw_mempools.
>

I see, makes sense.

I've implemented this by vacuuming upon receiving CORK_PLAYBACK_STREAM
or CORK_RECORD_STREAM. Hopefully this is the right approach.

Thanks,

-- 
Darwish
http://darwish.chasingpointers.com

