Re: Logging braindump

On Thu, 22 Mar 2012, Tommi Virtanen wrote:
> >        If we are logging a lot, buffer management has the potential
> >        to become a bottle-neck ... so we need to be able to allocate
> >        a record of the required size from the circular buffer
> >        with atomic instructions (at least in non-wrap situations).
> >
> >        But if records are allocated and then filled, we have to
> >        consider how to handle the case where the filling is
> >        delayed, and the reader catches up with an incomplete
> >        log record (e.g. skip it, wait how long, ???).
> >
> >        And while we hope this will never happen, we have to deal
> >        with what happens when the writer catches up with the
> >        reader, or worse, an incomplete log block ... where we might
> >        have to determine whether or not the owner is deceased (making
> >        it safe to break his record lock) ... or should we simply take
> >        down the service at that point (on the assumption that something
> >        has gone very wrong).
> 
> The Disruptor design handles all these, is simple in the sense of
> "that's what I would have built", and looks good.

My only problem with the Disruptor stuff was that, as I was reading it, it 
was very much like "yeah, given the limitations of Java, that's what you 
would do," but we're in a slightly different boat.  i.e., they use a 
ringbuffer of pointers to preallocated objects.

My guess is that the best bet would be preallocated Entry objects (either 
in a flat buffer or on the heap) with a preallocated per-entry buffer 
(say, 80 chars) that will spill over into something slow/simple when 
necessary, and something Disruptor-like to claim slots in the ringbuffer.
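
Something like this, roughly (a sketch only; all names are invented, and 
wrap handling, slot reuse, and the reader side are glossed over):

  #include <atomic>
  #include <cstdint>
  #include <cstring>
  #include <string>

  struct Entry {
    static const size_t inline_len = 80;
    char buf[inline_len];               // preallocated per-entry buffer
    std::string overflow;               // slow/simple spillover path
    size_t len = 0;
    std::atomic<bool> complete{false};  // reader skips/waits until set

    void fill(const char *msg, size_t n) {
      if (n <= inline_len)
        memcpy(buf, msg, n);
      else
        overflow.assign(msg, n);        // doesn't fit, take the slow path
      len = n;
      complete.store(true, std::memory_order_release);
    }
  };

  struct Ring {
    static const uint64_t size = 1024;  // power of two
    Entry entries[size];                // flat preallocated buffer
    std::atomic<uint64_t> head{0};

    Entry *claim() {
      // Disruptor-style claim: a single atomic fetch_add, no lock,
      // at least in the non-wrap case
      uint64_t seq = head.fetch_add(1, std::memory_order_relaxed);
      return &entries[seq & (size - 1)];
    }
  };

The complete flag is where the "reader catches up with an incomplete 
record" question from above lives: the reader has to decide how long to 
wait on it before skipping the entry or declaring the writer dead.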

But in any case, I think the key is first measuring how much time we spend

 - rendering the current entries
 - queueing each entry

under varying levels of concurrency.  With the current code, for instance, 
I think most time is spent converting crap into strings and waiting for a 
blocking flush.  We aren't logging millions of items, only hundreds.
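
Even something this dumb would tell us whether rendering dominates (the 
message format and counts here are made up, obviously; repeat with the 
queueing step added, and with N threads, for the rest of the picture):

  #include <chrono>
  #include <cstdio>
  #include <sstream>

  int main() {
    const int n = 100000;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; i++) {
      // stand-in for a typical dout line: a few ints and a float
      std::ostringstream ss;
      ss << "osd." << (i % 8) << " op " << i << " took " << (i * 0.001) << "s";
      volatile size_t len = ss.str().size();  // defeat dead-code elimination
      (void)len;
    }
    auto t1 = std::chrono::steady_clock::now();
    long long ns =
      std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    printf("render: %lld ns/entry\n", ns / n);
    return 0;
  }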

--

Ignoring the nitty gritty of the log queueing, though... does the basic 
framework make sense?  That is,

 - a set of predefined subsystems, each with their own log levels
 - a level to control which entries are gathered/rendered, with a fast 
   conditional check (in, say, the dout macro)
 - a level to control which entries are logged
 - a dump on crash (or other event) of everything we have

?
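
For the conditional check, I'm picturing something along these lines (the 
subsystem table, the names, and std::cerr as a stand-in for the real 
entry stream are all made up for illustration):

  #include <atomic>
  #include <iostream>

  struct SubsysLevels {
    std::atomic<int> gather{5};   // controls what gets rendered/queued
    std::atomic<int> log{1};      // controls what goes to the output
  };
  SubsysLevels g_subsys[16];      // one slot per predefined subsystem

  // fast conditional: a single relaxed load and compare before any
  // rendering work happens
  #define dout(sub, lvl) \
    if ((lvl) <= g_subsys[(sub)].gather.load(std::memory_order_relaxed)) \
      std::cerr

  // usage:
  //   dout(3, 20) << "skipped: never rendered" << std::endl;
  //   dout(3, 1)  << "gathered; written out only if 1 <= log level" << std::endl;

The gather/log split is what makes the crash dump useful: we can keep 
verbose entries in memory without paying to write them all to disk.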

sage
