Nautilus scrub and deep-scrub execution order

Hello Ceph-Users,

after upgrading one of our clusters to Nautilus we noticed the "x pgs not scrubbed/deep-scrubbed in time" warnings.
Some digging suggests that scrubbing happens in a more or less random order and does not take the age of the last scrub/deep-scrub into account.
I dumped the date of the last scrub for every PG twice, 90 minutes apart:
# $22 is the date portion of the last-scrub timestamp in our pg dump output
ceph pg dump | grep active | awk '{print $22}' | sort | uniq -c
dumped all
   2434 2020-08-30
   5935 2020-08-31
   1782 2020-09-01
      2 2020-09-02
      2 2020-09-03
      5 2020-09-06
      3 2020-09-08
      5 2020-09-09
     17 2020-09-10
    259 2020-09-12
  26672 2020-09-13
  12036 2020-09-14

dumped all
   2434 2020-08-30
   5933 2020-08-31
   1782 2020-09-01
      2 2020-09-02
      2 2020-09-03
      5 2020-09-06
      3 2020-09-08
      5 2020-09-09
     17 2020-09-10
     51 2020-09-12
  24862 2020-09-13
  14056 2020-09-14

It is pretty obvious that PGs scrubbed only a day ago are being scrubbed again for some reason, while PGs whose last scrub is two weeks old are left basically untouched.
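
If anyone wants to double-check this on their own cluster without relying on awk column positions (which shift between releases), the same counts can be pulled from the JSON output. This is just a sketch: it assumes jq is installed and that pg_stats sits under pg_map in the Nautilus pg dump JSON (older releases put it at the top level):

# count PGs by date of last scrub and last deep-scrub
ceph pg dump --format json 2>/dev/null | jq -r '.pg_map.pg_stats[].last_scrub_stamp' | cut -d' ' -f1 | sort | uniq -c
ceph pg dump --format json 2>/dev/null | jq -r '.pg_map.pg_stats[].last_deep_scrub_stamp' | cut -d' ' -f1 | sort | uniq -c
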
One way we are currently dealing with this is raising osd_scrub_min_interval to 72h, which forces the cluster to pick up the older PGs; roughly as sketched below.
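
Something like this, assuming the central config store available since Mimic (osd_scrub_min_interval takes seconds, so 72h = 259200):

# raise the minimum scrub interval cluster-wide to 72h
ceph config set osd osd_scrub_min_interval 259200
# confirm the value the OSDs will pick up
ceph config get osd osd_scrub_min_interval
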
Either way, this ordering can't be intentional.
Has anyone else seen this behavior?

Kind regards
Johannes