RE: mempools everywhere

Here's a two-line patch to eliminate the memory leak of the mempool object itself :)

#include <new>   // for placement new

mempool_t& GetPool(int index) {
   if (table[index] == nullptr) {
      // Back the pools with static storage: nothing is heap-allocated,
      // so there is nothing to leak.
      alignas(mempool_t) static char storage[num_mempools][sizeof(mempool_t)];
      table[index] = new (static_cast<void *>(storage[index])) mempool_t;
   }
   return *table[index];
}
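
For completeness, here are the declarations the patch assumes (illustrative
names and count; the real definitions would live in mempool.cc):

struct mempool_t { /* per-pool accounting */ };

static const int num_mempools = 16;     // illustrative count
static mempool_t *table[num_mempools];  // sits in .bss, so the runtime
                                        // zeroes it before any ctor runs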


Voila, no leaks :) Makes the .so unload problem easier too :)
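
The zero-initialized table[] is also what makes this link-order safe, per
Sage's note quoted below: a static constructor in any compilation unit may
call GetPool() before mempool.cc's own ctors have run.  A hypothetical
caller:

// some_other_file.cc -- hypothetical; safe during static initialization
// because table[] is already zeroed when this ctor runs
static mempool_t &my_pool = GetPool(3);

And since the pools are constructed in static storage and never destroyed,
there is nothing heap-allocated left behind at .so unload time either.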

> -----Original Message-----
> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-
> owner@xxxxxxxxxxxxxxx] On Behalf Of Allen Samuels
> Sent: Tuesday, October 18, 2016 10:00 AM
> To: Sage Weil <sweil@xxxxxxxxxx>; ceph-devel@xxxxxxxxxxxxxxx
> Subject: RE: mempools everywhere
> 
> > -----Original Message-----
> > From: Sage Weil [mailto:sweil@xxxxxxxxxx]
> > Sent: Tuesday, October 18, 2016 9:52 AM
> > To: ceph-devel@xxxxxxxxxxxxxxx
> > Cc: Allen Samuels <Allen.Samuels@xxxxxxxxxxx>
> > Subject: mempools everywhere
> >
> > The mempool PR is more or less ready for final test/merge.  Once it's
> > in place we'd love to use this all over the place so that we can tell
> > where memory is going.  There's one major remaining roadblock, though:
> > right now mempool is part of libglobal since it has global state (the
> > mempools themselves).  This prevents us from putting anything in, say,
> > osd_types.h (which is included in client code) into a mempool, which
> > rules out things like pg_log_t, a major consumer of memory.
> >
> > We have a few options...
> >
> > - Segregate types into different compilation units so that things that
> > are server-side only can be built separately (and put in mempools).
> >
> > - Just avoid these types :(
> >
> > - Use mempools for client code as well.
> 
> +1
> 
> >
> > The last one is the most appealing to me, but it violates a big
> > no-no of shared libraries (static state!).  However, I think that
> > mempools may be one of those special cases where the usual reasons
> > global variables are bad don't really apply.  The main
> > one being: if we have multiple users of, say, librados in the same
> > process, it is totally fine (and possibly desired?) that they would share the
> same mempools.
> >
> > What do you think?
> >
> > FWIW, there is one bit of potentially shady initialization going on
> > with the mempools.  Instead of statically defining the pools, we
> > instead declare an array of pool pointers, and rely on the compiler
> > and runtime environment to zero that for the process *before* any
> > constructors are run for any compilation unit.  This allows pools to
> > be allocated on-demand when needed by constructors in any compilation
> > unit, regardless of the link order relative to mempool.cc.  This
> > appears to be a safe thing to do (and avoids the very tedious process
> > of ensuring that mempool.cc is always linked first so that its ctors run first).
> 
> I don't think this is a cause for concern. There are hundreds of thousands
> of lines of existing code that make the same assumption (global statics are
> zeroed unless explicitly initialized).
> 
> >
> > If we want to take the plunge and make this usable by librados and
> > other client code, I think it may just work, modulo leaking the pool_t
> > structures if the .so is ever unloaded (not sure any librados user
> > does this?).  In that case, we could probably move the setup/teardown
> > into the init/fini .so methods, although we'd need to confirm that
> > these would run
> > *before* any constructors for statically linked modules.
> >
> > Anyway.. thoughts?
> >
> > sage
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the
> body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at
> http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html




[Index of Archives]     [CEPH Users]     [Ceph Large]     [Information on CEPH]     [Linux BTRFS]     [Linux USB Devel]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]
  Powered by Linux