Re: Cluster always scrubbing.


 



Hi Sean,
     I've cut out part of the slow request log; please find it attached.
I'm thinking about setting the scrub interval to something longer (maybe once a month).
An unrelated question: do you think deep scrub will bring performance down?
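For reference, this is roughly what I have in mind (just a sketch, assuming the usual Hammer option names; 2592000 seconds is only an example value for ~30 days):

    # push a longer scrub schedule to all running OSDs
    ceph tell osd.* injectargs '--osd_scrub_min_interval 2592000'
    ceph tell osd.* injectargs '--osd_scrub_max_interval 2592000'
    ceph tell osd.* injectargs '--osd_deep_scrub_interval 2592000'

    # and persist the same values in ceph.conf under [osd]:
    #   osd scrub min interval  = 2592000
    #   osd scrub max interval  = 2592000
    #   osd deep scrub interval = 2592000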




Best wishes,
Mika
 

2015-11-24 18:24 GMT+08:00 Sean Redmond <sean.redmond1@xxxxxxxxx>:
Hi,

That seems very odd - what do the logs say for the OSDs with slow requests?
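For example, something like this (assuming the default log locations):

    # cluster log on one of the monitor nodes
    grep -i 'slow request' /var/log/ceph/ceph.log | tail -50

    # individual OSD logs on the nodes hosting the affected OSDs
    grep -i 'slow request' /var/log/ceph/ceph-osd.*.log | tail -50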

Thanks

On Tue, Nov 24, 2015 at 2:20 AM, Mika c <mika.leaf666@xxxxxxxxx> wrote:
Hi Sean,
   Yes, the cluster has been in a scrubbing state (scrub + deep scrub) for almost two weeks.
   And the result of running `ceph pg dump | grep scrub` is empty.
   But `ceph health` shows "16 pgs active+clean+scrubbing+deep, 2 pgs active+clean+scrubbing".
   I also have 2 OSDs with slow request warnings.
   Are the two issues related?



Best wishes,
Mika
 

2015-11-23 17:59 GMT+08:00 Sean Redmond <sean.redmond1@xxxxxxxxx>:
Hi Mika,

Have the scrubs been running for a long time? Can you see what pool they are running on? You can check using `ceph pg dump | grep scrub`
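Roughly something like this (a sketch; the admin-socket command assumes the default paths):

    # PGs currently scrubbing; the pool id is the part of the PG id before the dot
    ceph pg dump pgs_brief 2>/dev/null | grep -i scrub

    # map pool ids back to pool names
    ceph osd lspools

    # effective scrub settings on one running OSD, via the admin socket
    ceph daemon osd.0 config show | grep scrub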

Thanks

On Mon, Nov 23, 2015 at 9:32 AM, Mika c <mika.leaf666@xxxxxxxxx> wrote:
Hi cephers,
 We are facing a scrub issue. Our Ceph cluster is running Trusty with Hammer 0.94.1 and has almost 320 OSD disks across 10 nodes.
 There are more than 30,000 PGs in the cluster.
 The cluster worked fine until last week, when we found that the cluster health status started showing "active+clean+scrubbing+deep".
 Some PGs finish scrubbing, but the next second another PG starts scrubbing (or deep scrubbing), every day.
 We did not change the scrub parameters; with the defaults it should scrub once per day and deep scrub once per week.
 Has anyone experienced this kind of issue?


Best wishes,
Mika
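P.S. For reference, the scrub-related defaults we are relying on, as I understand the Hammer values (please correct me if these are off):

    osd max scrubs          = 1         # at most one scrub at a time per OSD
    osd scrub min interval  = 86400     # try to scrub each PG about once a day
    osd scrub max interval  = 604800    # force a scrub after a week regardless of load
    osd deep scrub interval = 604800    # deep scrub about once a week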
 






2015-11-25 03:39:23.101368 osd.238 192.168.11.3:6882/2469348 1334 : cluster [WRN] slow request 30.316965 seconds old, received at 2015-11-25 03:38:52.784331: osd_op(client.7203403.0:16611642 pre.7202536.151695__shadow_.3qppd3o1hCO7fiuQJD18AOvttvcb5S7_3 [writefull 0~0] 36.c65d1f05 ack+ondisk+write+known_if_redirected e95048) currently waiting for subops from 38,355
2015-11-25 03:39:23.101372 osd.238 192.168.11.3:6882/2469348 1335 : cluster [WRN] slow request 30.313927 seconds old, received at 2015-11-25 03:38:52.787369: osd_op(client.7203403.0:16611643 pre.7202536.151695__shadow_.3qppd3o1hCO7fiuQJD18AOvttvcb5S7_3 [writefull 0~524288] 36.c65d1f05 ack+ondisk+write+known_if_redirected e95048) currently waiting for subops from 38,355
2015-11-25 03:39:23.101375 osd.238 192.168.11.3:6882/2469348 1336 : cluster [WRN] slow request 30.311336 seconds old, received at 2015-11-25 03:38:52.789960: osd_op(client.7203403.0:16611644 pre.7202536.151695__shadow_.3qppd3o1hCO7fiuQJD18AOvttvcb5S7_3 [write 524288~524288] 36.c65d1f05 ack+ondisk+write+known_if_redirected e95048) currently waiting for subops from 38,355
2015-11-25 03:39:23.101379 osd.238 192.168.11.3:6882/2469348 1337 : cluster [WRN] slow request 30.307592 seconds old, received at 2015-11-25 03:38:52.793703: osd_op(client.7203403.0:16611645 pre.7202536.151695__shadow_.3qppd3o1hCO7fiuQJD18AOvttvcb5S7_3 [write 1048576~524288] 36.c65d1f05 ack+ondisk+write+known_if_redirected e95048) currently waiting for subops from 38,355
2015-11-25 03:40:46.289760 osd.384 192.168.12.2:6806/573231 1869 : cluster [WRN] 3 slow requests, 3 included below; oldest blocked for > 30.988835 secs
2015-11-25 03:40:46.289765 osd.384 192.168.12.2:6806/573231 1870 : cluster [WRN] slow request 30.988835 seconds old, received at 2015-11-25 03:40:15.300878: osd_op(client.8634664.0:13286759 pre.7202536.151695__shadow_.gqjbnZqLSwkjCfw0wI-I7-OO9fX_iPT_3 [writefull 0~524288] 36.371c0419 ack+ondisk+write+known_if_redirected e95048) currently waiting for subops from 33,315
2015-11-25 03:40:46.289769 osd.384 192.168.12.2:6806/573231 1871 : cluster [WRN] slow request 30.985285 seconds old, received at 2015-11-25 03:40:15.304428: osd_op(client.8634664.0:13286760 pre.7202536.151695__shadow_.gqjbnZqLSwkjCfw0wI-I7-OO9fX_iPT_3 [write 524288~524288] 36.371c0419 ack+ondisk+write+known_if_redirected e95048) currently waiting for subops from 33,315
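All of the blocked ops above are waiting for subops from the replica OSDs (38,355 and 33,315), so those OSDs could be inspected over their admin sockets on the nodes hosting them (a sketch, assuming the default admin-socket setup):

    # operations currently in flight on a replica OSD (repeat for 355, 33 and 315)
    ceph daemon osd.38 dump_ops_in_flight

    # recently completed slow operations, with per-step timings
    ceph daemon osd.38 dump_historic_ops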
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
