On 04/28/17 09:24, Peter Maloney wrote:
> On 04/20/17 20:58, Peter Maloney wrote:
>> On 04/20/17 18:19, Sage Weil wrote:
>>> but I guess the underlying first question is whether any snap
>>> deletion happened anywhere around this period (2560+ sec before the
>>> warning, or around the time the op was sent in epoch 83264).
>> A snapshot-based backup job ran at 12:00 CEST and took until 18:25
>> CEST to finish, which overlaps that, and it creates and removes 120
>> snapshots spread throughout the process.
>>> (And yeah, removed_snaps is the field that matters here!)
>>>
>>> Thanks!
>>> sage
> This still happens in 10.2.7:
>
>> 2017-04-28 04:41:59.343443 osd.9 10.3.0.132:6808/2704 18 : cluster
>> [WRN] slow request 10.040822 seconds old, received at 2017-04-28
>> 04:41:49.302552: replica scrub(pg:
>> 4.145,from:0'0,to:93267'6832180,epoch:93267,start:4:a2d2c99e:::rbd_data.4bf687238e1f29.000000000000f7a3:0,end:4:a2d2dcd6:::rbd_data.46820b238e1f29.000000000000bfbc:f25e,chunky:1,deep:0,seed:4294967295,version:6)
>> currently reached_pg
>> ...
>> 2017-04-28 06:07:09.975902 osd.9 10.3.0.132:6808/2704 36 : cluster
>> [WRN] slow request 5120.673291 seconds old, received at 2017-04-28
>> 04:41:49.302552: replica scrub(pg:
>> 4.145,from:0'0,to:93267'6832180,epoch:93267,start:4:a2d2c99e:::rbd_data.4bf687238e1f29.000000000000f7a3:0,end:4:a2d2dcd6:::rbd_data.46820b238e1f29.000000000000bfbc:f25e,chunky:1,deep:0,seed:4294967295,version:6)
> and there are snaps created and removed around that time.

So I changed some settings a long time ago for unrelated reasons, and it
is now far rarer: it has happened only once since, though that time many
more than one request was blocked.

Here are the old settings:

> osd deep scrub stride = 524288 # 512 KiB
> osd scrub chunk min = 1
> osd scrub chunk max = 1
> osd scrub sleep = 0.5

And the new:

> osd deep scrub stride = 4194304 # 4 MiB
> osd scrub chunk min = 20
> osd scrub chunk max = 25
> osd scrub sleep = 4
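
For reference, a minimal sketch of how the new values could be applied,
assuming jewel (10.2.x): put them in the [osd] section of ceph.conf so
they persist across restarts, and inject them into the running OSDs with
"ceph tell ... injectargs". Injection may warn that a value is
"unchangeable" even when it takes effect, so it is worth spot-checking
via the admin socket afterwards.

    # ceph.conf -- persists across OSD restarts
    [osd]
    osd deep scrub stride = 4194304   # read 4 MiB per deep-scrub op
    osd scrub chunk min = 20          # scrub at least 20 objects per chunk
    osd scrub chunk max = 25          # ... and at most 25 per chunk
    osd scrub sleep = 4               # sleep 4 s between chunks

    # push the same values to all running OSDs without a restart
    ceph tell osd.* injectargs '--osd_deep_scrub_stride 4194304 --osd_scrub_chunk_min 20 --osd_scrub_chunk_max 25 --osd_scrub_sleep 4'

    # spot-check one daemon (osd.0 is just an example id)
    ceph daemon osd.0 config show | grep scrub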