Re: Handling scrubbing/deep scrubbing

Hi Kamil,

We have a similar setup; this is our config:

  osd                                   advanced osd_max_scrubs                        1
  osd                                   advanced osd_recovery_max_active               4
  osd                                   advanced osd_recovery_max_single_start         1
  osd                                   advanced osd_recovery_sleep                    0.000000
  osd                                   advanced osd_scrub_auto_repair                 true
  osd                                   advanced osd_scrub_begin_hour                  18
  osd                                   advanced osd_scrub_end_hour                    6
  osd                                   advanced osd_scrub_invalid_stats               true


Our scrubs start at 18:00 and finish at 06:00. That window is enough, and during the first hours of each day the system is ready and we don't see any performance problems due to scrubbing.

We've run it this way for about a year with no scrub issues.
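
If you want to verify that scrubs are keeping up, something like this lists the PGs with the oldest deep scrubs (just a sketch; it assumes jq is installed and that the JSON field names match your release):

  # Show the 20 PGs with the oldest deep scrub timestamps (oldest first)
  ceph pg dump --format json 2>/dev/null \
    | jq -r '.pg_map.pg_stats[] | [.pgid, .last_deep_scrub_stamp] | @tsv' \
    | sort -k2 \
    | head -20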

We use ceph config set to maintain these settings in the monitors' configuration database, so they are kept by the quorum and apply to every OSD.
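
Roughly what that looks like (values taken from the dump above, run from any node with an admin keyring):

  ceph config set osd osd_max_scrubs 1
  ceph config set osd osd_scrub_begin_hour 18
  ceph config set osd osd_scrub_end_hour 6
  ceph config set osd osd_scrub_auto_repair true
  ceph config set osd osd_scrub_invalid_stats true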

The cluster's main use is currently S3.

Regards
Manuel


-----Original Message-----
From: Kamil Szczygieł <kamil@xxxxxxxxxxxx>
Sent: Monday, May 25, 2020 9:48 AM
To: ceph-users@xxxxxxx
Subject: Handling scrubbing/deep scrubbing

Hi,

I have a 4-node cluster with 13x15TB 7.2k RPM OSDs per node and around 300TB of data. I'm having issues with deep scrubs/scrubs not being done in time. Any tips for handling these operations with large disks like this?

osd pool default size = 2
osd deep scrub interval = 2592000
osd scrub begin hour = 23
osd scrub end hour = 5
osd scrub sleep = 0.1

Cheers,
Kamil
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



