Re: stalls caused by scrub on jewel

Hi Sage, Sam,

We're impacted by this bug (case 01725311). Our cluster is running RHCS 2.0 and is no longer able to scrub or deep-scrub.

[1] http://tracker.ceph.com/issues/17859
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1394007
[3] https://github.com/ceph/ceph/pull/11898

I'm worried we'll have to live with a cluster that can't scrub or deep-scrub until March 2017 (the ETA for RHCS 2.2, based on Jewel 10.2.4).

Can we get this fix any sooner?

Regards

Frédéric.

On 15/11/2016 at 23:35, Sage Weil wrote:
Hi everyone,

There was a regression in jewel that can trigger long OSD stalls during
scrub.  How long the stalls are depends on how many objects are in your
PGs, how fast your storage device is, and what is cached, but in at least
one case they were long enough that the OSD's internal heartbeat check
(120 seconds) failed and the OSD committed suicide.

The workaround for now is to simply

  ceph osd set noscrub

as the bug is only triggered by scrub.  A fix is being tested and will be
available shortly.
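
As a minimal sketch (assuming the standard ceph CLI flags; setting
nodeep-scrub as well is optional and only needed if deep scrubs hit the
same path in your environment), applying and later reverting the
workaround looks like:

  # disable regular scrubs cluster-wide; the bug is only triggered by scrub
  ceph osd set noscrub
  # optionally also hold off deep scrubs until the fix lands
  ceph osd set nodeep-scrub

  # once a fixed build is deployed, re-enable scrubbing
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub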

If you've seen any kind of weird latencies or slow requests on jewel, I
suggest setting noscrub and seeing if they go away!
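
For example (a rough check, assuming the usual jewel-era status commands),
set the flag and watch whether the slow-request warnings stop over the
next scrub window:

  ceph osd set noscrub
  # look for 'requests are blocked' / slow request warnings
  ceph -s
  ceph health detail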

The tracker bug is

  http://tracker.ceph.com/issues/17859

Big thanks to Yoann Moulin for helping track this down!

sage



