If you're scheduling them appropriately so that no deep scrubs happen on their own, then you can simply check the cluster status to see whether any PGs are deep scrubbing at all. If you're only scheduling them for specific pools, then you can confirm which PGs are being deep scrubbed in a given pool with `ceph pg dump | grep deep | grep ^${pool_num}\.`. Hope that helps.
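For example, a minimal (untested) sketch of both checks; the pool id of 1 and the 60-second poll interval are placeholders to adjust for your cluster:

    # Wait until no PG anywhere in the cluster is deep scrubbing.
    while ceph pg dump 2>/dev/null | grep -q 'scrubbing+deep'; do
        sleep 60
    done

    # Or, for a single pool: list that pool's PGs currently being deep scrubbed.
    pool_num=1    # numeric pool id, look it up with 'ceph osd lspools'
    ceph pg dump 2>/dev/null | grep deep | grep "^${pool_num}\."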
On Fri, Sep 29, 2017 at 5:18 AM Stefan Kooman <stefan@xxxxxx> wrote:
Quoting Christian Balzer (chibi@xxxxxxx):
>
> On Thu, 28 Sep 2017 22:36:22 +0000 Gregory Farnum wrote:
>
> > Also, realize the deep scrub interval is a per-PG thing and (unfortunately)
> > the OSD doesn't use a global view of its PG deep scrub ages to try to
> > schedule them intelligently across that time. If you really want to force
> > this, I believe a few sites have written scripts to do it by turning off
> > deep scrubs, forcing individual PGs to deep scrub at intervals, and then
> > enabling deep scrubs again.
> > -Greg
> >
> This approach works best, and without surprises down the road, if
> osd_scrub_interval_randomize_ratio is disabled and osd_scrub_begin_hour
> and osd_scrub_end_hour are set to your needs.
>
> I basically kick the deep scrubs off on a per-OSD basis (one at a time
> and staggered, of course). If your cluster is small/fast enough, that
> pattern will be retained indefinitely, with only one PG doing a deep
> scrub at any given time (with the default osd_max_scrubs of 1, of course).
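For reference, a rough sketch of that kick-off loop, assuming deep scrubs are first disabled cluster-wide, osd_max_scrubs stays at its default of 1, and the scrub hour/randomize settings are already configured as above; the PG selection and the sleep pacing are placeholders to adapt:

    #!/bin/bash
    # Stop the OSDs from scheduling deep scrubs on their own.
    ceph osd set nodeep-scrub

    # Kick off deep scrubs one PG at a time (PG IDs look like "1.3f" in
    # 'ceph pg dump') and wait for each to finish before starting the next.
    for pg in $(ceph pg dump 2>/dev/null | awk '$1 ~ /^[0-9]+\./ {print $1}'); do
        ceph pg deep-scrub "$pg"
        sleep 30    # give the scrub a moment to show up in the PG state
        while ceph pg dump 2>/dev/null | grep -q 'scrubbing+deep'; do
            sleep 60
        done
    done

    # Hand normal scrub scheduling back to the OSDs.
    ceph osd unset nodeep-scrub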
Is there a way to check whether an OSD is finished with its deep scrub, so
that a new manual deep-scrub command can be given for the next OSD in line?
Or do you use a fixed time interval?
Gr. Stefan
--
| BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351
| GPG: 0xD14839C6 +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com