Re: cosd multi-second stalls cause "wrongly marked me down"

Sage Weil wrote:
On Fri, 8 Apr 2011, Jim Schutt wrote:
Hi Sage,

Sage Weil wrote:
On Wed, 16 Feb 2011, Jim Schutt wrote:
On Wed, 2011-02-16 at 14:40 -0700, Gregory Farnum wrote:
On Wednesday, February 16, 2011 at 1:25 PM, Jim Schutt wrote:
Hi,

I've been testing v0.24.3 w/ 64 clients against
1 mon, 1 mds, 96 osds. Under heavy write load I
see:
 [WRN] map e7 wrongly marked me down or wrong addr

I was able to sort through the logs and discover that when
this happens I have large gaps (10 seconds or more) in osd
heartbeat processing. In those heartbeat gaps I've discovered
long periods (5-15 seconds) where an osd logs nothing, even
though I am running with debug osd/filestore/journal = 20.

Is this a known issue?
You're running on btrfs?
Yep.
Are the cosd log files on the same btrfs volume as the btrfs data, or
elsewhere?  The heartbeat thread takes some pains to avoid any locks that
may be contended and to avoid any disk I/O, so in theory a btrfs stall
shouldn't affect anything.  We may have missed something... do you have a log
showing this in action?
In the end, after all the various things I've tried, I think
that the root cause of this is relatively simple: I don't
have enough CPU cycles available on my servers to do the
amount of OSD processing required to service my client
load, given the number of OSDs per server I'm running.

With too much work and not enough cycles to do it, the
one real-time component of Ceph, heartbeat processing,
eventually must miss its deadline (no heartbeat "observed"
in osd_heartbeat_grace seconds), since it requires work
done by components (messengers, memory allocation system)
that don't provide real-time guarantees.
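(To make that deadline concrete: the check amounts to something like the
sketch below.  This is simplified code of my own for illustration, not the
actual cosd heartbeat code, and the names are made up:)

    #include <ctime>
    #include <map>

    // Simplified illustration only.  Each peer's last observed heartbeat
    // time is compared against a grace period; if the gap grows too large
    // the peer looks failed, even when the real cause is that this host
    // was too busy to process heartbeat messages in time.
    struct HeartbeatCheck {
      double grace;                               // think osd_heartbeat_grace, in seconds
      std::map<int, std::time_t> last_heartbeat;  // peer osd id -> last time observed

      bool peer_looks_failed(int osd, std::time_t now) const {
        std::map<int, std::time_t>::const_iterator p = last_heartbeat.find(osd);
        if (p == last_heartbeat.end())
          return false;                           // never heard from it
        return (now - p->second) > grace;         // missed the deadline
      }
    };

Nothing feeding that check - messengers, allocator, logging - runs with any
real-time guarantee, so a long enough stall anywhere in that path shows up
as a missed deadline.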

All of my experiences on this make perfect sense when
viewed from this perspective.

For example, when working with tcmalloc, I learned I
could compile it with CXXFLAGS=-DTCMALLOC_LARGE_PAGES,
which causes tcmalloc to allow objects up to 256k in
its thread caches, rather than the default 32k.  So I
used that in combination with a 256k stripe width, on
the theory that deallocating messages would mostly only
interact with the thread cache, but it didn't help.

When looking at thread stacks generated by my
Mutex::LockOrAbort trick with a 5 sec wait to acquire
the pipe_lock, I often saw threads waiting on the
DoutLocker mutex.  Since lots of Ceph debugging output
happens with other locks being held, debugging might
thus slow things down out of proportion to the processing
required to generate the log messages.  Yet, when I
configured no debugging, I saw no improvement; it might
be that things got a little worse.  This now makes sense
to me in light of my above hypothesis about not enough
available CPU cycles - there's still too much work to
do, even with no cycles spent on debugging output.

What I didn't see very often in my thread stacks was
stack frames from tcmalloc.  This doesn't make sense
to me if the memory allocation subsystem is the root
cause of my problem, but makes perfect sense if there
aren't enough CPU cycles: not so much time is spent
deallocating memory, so it is caught in the act less
often by LockOrAbort.
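(In case anyone wants to reproduce the trick: LockOrAbort is nothing
fancy, roughly the sketch below.  This is from memory and just
illustrative - the 5-second timeout is simply the value I picked:)

    #include <pthread.h>
    #include <time.h>
    #include <cstdlib>

    // Sketch of the LockOrAbort idea: try to take the mutex for a bounded
    // time; if we can't, abort() so the core dump captures every thread's
    // stack at the moment of the stall.
    struct SketchMutex {
      pthread_mutex_t m;

      SketchMutex() { pthread_mutex_init(&m, NULL); }

      void LockOrAbort(int timeout_sec = 5) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);  // timedlock takes an absolute time
        deadline.tv_sec += timeout_sec;
        if (pthread_mutex_timedlock(&m, &deadline) != 0)
          abort();                                 // dump core, go read the stacks
      }

      void Unlock() { pthread_mutex_unlock(&m); }
    };

A thread stuck on the DoutLocker mutex then shows up in the core as just
another waiter whenever some other thread trips the abort.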

What finally seemed to help avoid missed heartbeats
in my configuration was the following combination:
turning off debugging, running with these throttling parameters:
        osd client message size cap = 14000000
        client oc size =              14000000
        client oc max dirty =         35000000
        filestore queue max bytes =   35000000
        journal queue max bytes =     35000000
        ms dispatch throttle bytes =  14000000
        objecter inflight op bytes =  35000000
and using a 512k stripe width.

Evidently keeping a relatively small amount of data in
flight, in smaller chunks, allowed heartbeat processing to
hit its mark more often.  But it only delayed things; it
didn't solve the problem.  This makes sense to me if the
root cause is that I don't have enough CPU cycles available
per OSD, because I didn't change the offered load.

So, in the short term I guess I need to run fewer cosd
instances per server.

There is one other thing to look at, and that's the number of threads used by each cosd process. Have you tried setting

	osd op threads = 1

(or even 0)? That will limit the number of concurrent IOs in flight to the fs. Setting it to 0 will avoid using a thread pool at all and will process the IO in the message dispatch thread (though we haven't tested that path recently, so there may be issues).
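(To illustrate what that knob changes - this is only a sketch of the
queue-to-a-pool vs. do-it-inline pattern, not the actual cosd dispatch
code:)

    #include <queue>

    // Illustration only.  A positive op thread count means dispatch queues
    // ops for a worker pool, so several can be in flight against the fs at
    // once; zero means the dispatch thread does the work itself, one op at
    // a time.
    struct Op { /* payload elided */ };

    struct Dispatcher {
      int op_threads;              // plays the role of "osd op threads"
      std::queue<Op*> workqueue;   // drained by worker threads (not shown)

      void do_op(Op *op) { /* apply the op to the filestore */ }

      void dispatch(Op *op) {
        if (op_threads > 0)
          workqueue.push(op);      // workers pick it up concurrently
        else
          do_op(op);               // inline, in the dispatch thread
      }
    };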

I'll try this 2nd, since it's easy.


I would also be interested in seeing a system-level profile (oprofile?) to see where CPU time is being spent. There is likely low-hanging fruit in the OSD that would reduce CPU overhead.

This will take me a little while, since I need to learn
about the tools.  But since I need to learn about them
anyway, that's a good thing.


I guess the other thing that would help to confirm this is to just halve the number of OSDs on your machines in a test and see if the problem goes away.

I was going to try this first, exactly because it seems like
a definitive test.


If my analysis above is correct, do you think anything
can be gained by running the heartbeat and heartbeat
dispatcher threads as SCHED_RR threads?  Since tick() runs
heartbeat_check(), that would also need to be SCHED_RR,
or the heartbeats could arrive on time but not be checked
until it was too late.
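(Concretely, I was thinking of something like the sketch below for those
threads - just the standard pthreads call, with the priority picked
arbitrarily:)

    #include <pthread.h>
    #include <sched.h>

    // Sketch only: promote the calling thread (e.g. the heartbeat or
    // heartbeat dispatch thread) to SCHED_RR so it keeps getting CPU even
    // when the worker threads have the box saturated.  Needs root or
    // CAP_SYS_NICE; priority 1 here is an arbitrary choice.
    static int make_thread_realtime(void)
    {
      struct sched_param param;
      param.sched_priority = 1;
      return pthread_setschedparam(pthread_self(), SCHED_RR, &param);
    }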

That sounds worth trying. I don't care much about the tick() thread, though... if the machine is loaded and we can't check heartbeats, that is at least fail-safe. And hopefully other nodes are able to catch the slow guy.

The reason I mentioned tick(), though, is that I was hoping to avoid
causing extra work checking PGs (and maybe starting on re-replication,
depending on how long things drag on?) during the interval until an
OSD notices that it was marked down.  I.e. I want to avoid triggering
a cascade of erroneous failure diagnoses, because I worry about what
happens if there's a real hardware failure in that window.  E.g., I
don't yet understand all the algorithms well enough to predict what will
happen if the OSD holding the primary copy of some object is
erroneously marked down, and then a disk for the OSD holding
the replica fails.  It seems like there has to be a window there
until the primary OSD is marked up again where read requests on
such objects cannot complete, assuming 2-way replication?


In the meantime, it may also be prudent for us to lower our queue size thresholds. The current numbers were all pulled out of a hat (100MB? Sure!).

:)

FWIW, I wanted to check my reasoning for "ms dispatch throttle bytes"
with you.  I thought I found in the code that it was used in the rep op
dispatcher thread, so since I am doing 2-way replication, on average
every OSD should see one rep op for each client write op it sees.
According to that theory, I should set
  (ms dispatch throttle bytes) = (replication level - 1) * (osd client message size cap)
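(Plugging in my numbers as a sanity check, assuming I've read that code
right: with 2-way replication and osd client message size cap = 14000000,
that works out to

  ms dispatch throttle bytes = (2 - 1) * 14000000 = 14000000

which matches what I'm running with above.)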
Does that sound right to you?

Thanks -- Jim


sage





