Re: xio-rados-firefly branch update

Hi Greg,

I forgot to finish a couple of thoughts, filling them in...

----- "Matt W. Benjamin" <matt@xxxxxxxxxxxx> wrote:

> > > 2. about the completion mechanisms--with the addition of
> > > claim_data, this is provisionally complete
> > 
> > Can you talk about this a bit in general before I comment? I want
> > to make sure I understand what's going on here, particularly with
> > the way you're doing memory allocation in the mempools and handling
> > the refcounting.

I didn't actually talk about it :).  The mempools are used throughout
to avoid allocation on XioMessenger fast paths (which is most of
them).  We arrived at the need for this via profiling and measurement,
of course.
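
To make that concrete, here's a toy version of the idea (hypothetical
names, not the actual XioMessenger pool): a fixed-size free list
carved out of one up-front allocation, so alloc/free on the fast path
never touch the heap.

#include <cstddef>
#include <mutex>
#include <vector>

// Toy fixed-size pool: all memory is allocated once, up front;
// alloc() and free() just pop/push a free list.  slot_size must be
// at least sizeof(Slot).
class MsgPool {
  struct Slot { Slot* next; };
  std::vector<char> storage_;
  Slot* free_list_ = nullptr;
  std::mutex lock_;
public:
  MsgPool(std::size_t slot_size, std::size_t count)
    : storage_(slot_size * count) {
    // Thread every slot onto the free list at construction time.
    for (std::size_t i = 0; i < count; ++i) {
      Slot* s = reinterpret_cast<Slot*>(&storage_[i * slot_size]);
      s->next = free_list_;
      free_list_ = s;
    }
  }
  void* alloc() {
    std::lock_guard<std::mutex> g(lock_);
    if (!free_list_) return nullptr;   // exhausted; caller must cope
    Slot* s = free_list_;
    free_list_ = s->next;
    return s;
  }
  void free(void* p) {
    std::lock_guard<std::mutex> g(lock_);
    Slot* s = static_cast<Slot*>(p);
    s->next = free_list_;
    free_list_ = s;
  }
};

The real pools are backed by Accelio's pool facility rather than a
vector, but the shape is the same: nothing on the hot path calls into
the system allocator.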

The most complicated case is reclaiming memory for a Message and
related structures on the responder side.  We manage that memory
together using a per-request pool, which is a member of the completion
hook object.  (The backing store is the Xio pool facility.)  The
decoded Message itself (assuming we decode successfully) and each
tracking Ceph buffer (logic for which we rewrote since the last
version, as noted earlier) each hold a reference on the hook object.
When the last of those refs is released (i.e., the upper-layer code is
done with the Message and with any buffers it claimed), a cleanup
process starts on the completion hook.  This takes a new initial
reference (yes, it's atomic) on the hook, which is handed off, for
final cleanup, to the XioPortal thread that originally delivered the
message; that thread needs to be the one which returns xio_msg buffers
to Accelio.  Once that is done, the portal thread drops the last
reference to the completion hook, and the hook is recycled as well.
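
In sketch form, the refcount dance might look like this (hypothetical
names and stubbed-out bodies; the real types live in the XioMessenger
code):

#include <atomic>
#include <functional>

// Sketch of the responder-side lifecycle (hypothetical names).  The
// decoded Message and each claimed buffer call get() when they take
// their reference, put() when they're done with it.
struct CompletionHook {
  std::atomic<int> nrefs{0};
  // Stand-in for queueing work to the XioPortal thread that
  // originally delivered the message.
  std::function<void(std::function<void()>)> post_to_portal;

  void get() { nrefs.fetch_add(1, std::memory_order_relaxed); }

  void put() {
    if (nrefs.fetch_sub(1, std::memory_order_acq_rel) == 1) {
      // Last upper-layer ref just dropped: take a fresh (atomic)
      // reference and hand the hook to its originating portal thread.
      get();
      post_to_portal([this] { finish_on_portal(); });
    }
  }

  // Runs only on the portal thread that delivered the message; it
  // has to be the one returning xio_msg buffers to Accelio.
  void finish_on_portal() {
    return_xio_msgs_to_accelio();
    // Drop the cleanup reference; the hook and its per-request pool
    // are recycled with it.
    if (nrefs.fetch_sub(1, std::memory_order_acq_rel) == 1)
      recycle();
  }

  void return_xio_msgs_to_accelio() { /* give buffers back to Accelio */ }
  void recycle() { /* return hook + pool to the Xio pool facility */ }
};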

> 
> Yes, and also, we have just pushed a major update to this logic on
> our current stabilization branch
> (xio-rados-exp-noreg-buf-qs-upstream-ow-mdw).  The logic in the last
> version was overly complex.  We streamlined things, and also have
> switched to using the Xio one-way message paradigm, which, as you
> mentioned a while back, is closer to Ceph's and also has higher
> performance.
> 
> >
> > Is "new (bp) xio_msg_buffer(m_hook, buf, len);" some syntax I'm
> > unaware of, or a merge/rebase issue? (Seriously, no idea. :)
> 
> This is C++ "placement new."

Placement new is how you substitute your own memory allocation
strategy for the built-in new and delete (which typically use the
system allocator, of course).  Essentially, it calls the desired
constructor on memory that has already been allocated.  This comes up
because we're using Accelio's pool allocator.
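
In isolation it looks like this (generic C++, not the actual Ceph
code):

#include <cstddef>
#include <cstdlib>
#include <new>      // declares the placement form of operator new

struct xio_msg_buffer {
  void* hook; char* buf; std::size_t len;
  xio_msg_buffer(void* h, char* b, std::size_t l)
    : hook(h), buf(b), len(l) {}
};

int main() {
  // The memory comes from somewhere else (a pool, in our case);
  // plain malloc stands in for it here.
  void* bp = std::malloc(sizeof(xio_msg_buffer));
  // Placement new: no allocation, just run the constructor on bp.
  xio_msg_buffer* m = new (bp) xio_msg_buffer(nullptr, nullptr, 0);
  // There is no matching delete: run the destructor explicitly, then
  // hand the raw memory back to wherever it came from.
  m->~xio_msg_buffer();
  std::free(bp);
  return 0;
}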


-- 
Matt Benjamin
CohortFS, LLC.
206 South Fifth Ave. Suite 150
Ann Arbor, MI  48104

http://cohortfs.com

tel.  734-761-4689 
fax.  734-769-8938 
cel.  734-216-5309 