Re: cosd multi-second stalls cause "wrongly marked me down"

On Fri, 8 Apr 2011, Jim Schutt wrote:
> Hi Sage,
> 
> Sage Weil wrote:
> > On Wed, 16 Feb 2011, Jim Schutt wrote:
> > > On Wed, 2011-02-16 at 14:40 -0700, Gregory Farnum wrote:
> > > > On Wednesday, February 16, 2011 at 1:25 PM, Jim Schutt wrote:
> > > > > Hi,
> > > > > 
> > > > > I've been testing v0.24.3 w/ 64 clients against
> > > > > 1 mon, 1 mds, 96 osds. Under heavy write load I
> > > > > see:
> > > > >  [WRN] map e7 wrongly marked me down or wrong addr
> > > > > 
> > > > > I was able to sort through the logs and discover that when
> > > > > this happens I have large gaps (10 seconds or more) in osd
> > > > > heartbeat processing. In those heartbeat gaps I've discovered
> > > > > long periods (5-15 seconds) where an osd logs nothing, even
> > > > > though I am running with debug osd/filestore/journal = 20.
> > > > > 
> > > > > Is this a known issue?
> > > > You're running on btrfs? 
> > > Yep.
> > 
> > Are the cosd log files on the same btrfs volume as the btrfs data, or
> > elsewhere?  The heartbeat thread takes some pains to avoid any locks that
> > may be contended and to avoid any disk I/O, so in theory a btrfs stall
> > shouldn't affect anything.  We may have missed something... do you have a log
> > showing this in action?
> 
> In the end, after all the various things I've tried, I think
> that the root cause of this is relatively simple: I don't
> have enough CPU cycles available on my servers to do the
> amount of OSD processing required to service my client
> load, given the number of OSDs per server I'm running.
> 
> With too much work and not enough cycles to do it, the
> one real-time component of Ceph, heartbeat processing,
> eventually must miss its deadline (no heartbeat "observed"
> in osd_heartbeat_grace seconds), since it requires work
> done by components (messengers, memory allocation system)
> that don't provide real-time guarantees.
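> 
> (Conceptually that deadline is just the peer-failure check: if no
> heartbeat has been observed from an osd within osd_heartbeat_grace
> seconds, it gets reported down.  A minimal sketch of that check; the
> names here are made up for illustration, not taken from the actual
> OSD code:)
> 
>     #include <ctime>
>     #include <map>
> 
>     // Illustrative grace check: 'last_heard' would be updated by the
>     // heartbeat dispatch path; 'grace' comes from osd_heartbeat_grace.
>     void check_heartbeats(const std::map<int, time_t>& last_heard,
>                           time_t now, int grace)
>     {
>       for (std::map<int, time_t>::const_iterator p = last_heard.begin();
>            p != last_heard.end(); ++p) {
>         if (now - p->second > grace) {
>           // report_peer_down(p->first);  // e.g. tell the monitor
>         }
>       }
>     }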
> 
> All of my experiences on this make perfect sense when
> viewed from this perspective.
> 
> For example, when working with tcmalloc, I learned I
> could compile it with CXXFLAGS=-DTCMALLOC_LARGE_PAGES,
> which causes tcmalloc to allow objects up to 256k in
> its thread caches, rather than the default 32k.  So I
> used that in combination with a 256k stripe width, on
> the theory that deallocating messages would mostly only
> interact with the thread cache, but it didn't help.
> 
> When looking at thread stacks generated by my
> Mutex::LockOrAbort trick with a 5 sec wait to acquire
> the pipe_lock, I often saw threads waiting on the
> DoutLocker mutex.  Since lots of Ceph debugging output
> happens with other locks being held, debugging might
> thus slow things down out of proportion to the processing
> required to generate the log messages.  Yet, when I
> configured no debugging, I saw no improvement; it might
> be that things got a little worse.  This now makes sense
> to me in light of my above hypothesis about not enough
> available CPU cycles - there's still too much work to
> do, even with no cycles spent on debugging output.
> 
> What I didn't see very often in my thread stacks were
> stack frames from tcmalloc.  This doesn't make sense
> to me if the memory allocation subsystem is the root
> cause of my problem, but makes perfect sense if there's
> not enough CPU cycles: not so much time is spent
> deallocating memory, so it is caught in the act less
> often by LockOrAbort.
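> 
> (The LockOrAbort trick, sketched roughly: a timed acquire that gives up
> and aborts after the deadline, so that thread stacks can be collected,
> e.g. from the resulting core.  Something along these lines, shown with
> plain pthreads and a hard-coded 5 second timeout purely for
> illustration rather than the actual Ceph Mutex class:)
> 
>     #include <pthread.h>
>     #include <time.h>
>     #include <stdlib.h>
> 
>     // Illustrative only: try to take 'lock' for up to 'secs' seconds;
>     // if that fails, abort() so we can see what every thread was
>     // doing at the time.
>     void lock_or_abort(pthread_mutex_t *lock, int secs)
>     {
>       struct timespec deadline;
>       clock_gettime(CLOCK_REALTIME, &deadline);
>       deadline.tv_sec += secs;
>       if (pthread_mutex_timedlock(lock, &deadline) != 0)
>         abort();
>     }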
> 
> What finally seemed to help avoid missed heartbeats
> in my configuration was the following combination:
> turning off debugging, running with these throttling parameters:
>         osd client message size cap = 14000000
>         client oc size =              14000000
>         client oc max dirty =         35000000
>         filestore queue max bytes =   35000000
>         journal queue max bytes =     35000000
>         ms dispatch throttle bytes =  14000000
>         objecter inflight op bytes =  35000000
> and using a 512k stripe width.
> 
> Evidently keeping a relatively small amount of data in
> flight, in smaller chunks, allowed heartbeat processing to
> hit its mark more often.  But it only delayed things; it
> didn't solve the problem.  This makes sense to me if the
> root cause is that I don't have enough CPU cycles available
> per OSD, because I didn't change the offered load.
> 
> So, in the short term I guess I need to run fewer cosd
> instances per server.

There is one other thing to look at, and that's the number of threads used 
by each cosd process.  Have you tried setting

	osd op threads = 1

(or even 0)?  That will limit the number of concurrent IOs in flight to the 
fs.  Setting it to 0 avoids using a thread pool at all and processes the IO 
in the message dispatch thread, though we haven't tested that path recently 
so there may be issues.

I would also be interested in seeing a system-level profile (oprofile?) to 
see where CPU time is being spent.  There is likely some low-hanging fruit 
in the OSD that would reduce CPU overhead.

I guess the other thing that would help to confirm this is to just halve 
the number of OSDs on your machines in a test and see if the problem goes 
away.

> If my analysis above is correct, do you think anything
> can be gained by running the heartbeat and heartbeat
> dispatcher threads as SCHED_RR threads?  Since tick() runs
> heartbeat_check(), that would also need to be SCHED_RR,
> or the heartbeats could arrive on time but not be checked
> until it was too late.

That sounds worth trying.  I don't care much about the tick() thread, 
though... if the machine is heavily loaded and we can't check heartbeats, 
that is at least fail-safe.  And hopefully other nodes are able to catch 
the slow guy.
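
If you want to experiment with that, the plumbing itself is small; a
minimal sketch of promoting a thread to SCHED_RR with plain pthreads
(this would be new plumbing, not something already wired into cosd; it
needs root or CAP_SYS_NICE, and the priority value is arbitrary):

    #include <pthread.h>
    #include <sched.h>

    // Sketch: promote the calling thread to SCHED_RR at the given
    // priority.  Returns 0 on success, an errno value otherwise.
    int make_thread_rt(int priority)
    {
      struct sched_param param;
      param.sched_priority = priority;   // e.g. something low like 1
      return pthread_setschedparam(pthread_self(), SCHED_RR, &param);
    }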

In the meantime, it may also be prudent for us to lower our queue size 
thresholds.  The current numbers were all pulled out of a hat (100MB? 
Sure!).

sage