Experience with scrub tunings?

Hi,

We want to limit the impact of (deep-)scrubs on PGs that contain a lot of OMAP data on our cluster. osd_scrub_sleep helps to limit the impact on cluster performance, but it obviously also slows down the scrubbing process dramatically. So I'm looking for knobs to turn that safely increase scrubbing efficiency while limiting the impact on client performance. There seem to be a few candidates:

- name: osd_scrub_max_preemptions
  type: uint
  level: advanced
  desc: Set the maximum number of times we will preempt a deep scrub due to a client
    operation before blocking client IO to complete the scrub
  default: 5
  min: 0
  max: 30

I'm tempted to bump this to 30. This might slow down scrubs, but should allow them to make progress sooner or later.
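
For reference, this is roughly how I'd apply and verify it at runtime with the CLI (assuming we use the centralized config store, i.e. Mimic or later):

  ceph config set osd osd_scrub_max_preemptions 30
  ceph config get osd osd_scrub_max_preemptions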

- name: osd_deep_scrub_keys
  type: int
  level: advanced
  desc: Number of keys to read from an object at a time during deep scrub
  default: 1024
  with_legacy: true

Would increasing this value by an order of magnitude (or more) help when reading a lot of OMAP data? I.e. is frequently requesting a batch of keys an expensive operation? Or would this be counterproductive (spending more time checking the data per batch, hence blocking operations for a longer period at a time)?
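
If it's worth experimenting with, my plan would be to try a bigger batch size on a single OSD first and then trigger a deep scrub of one of the OMAP-heavy PGs to compare timings (osd.12 and the PG id below are just placeholders):

  ceph config set osd.12 osd_deep_scrub_keys 4096
  ceph pg deep-scrub 7.1a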

Are there other tunables that might reduce scrub impact for these kinds of PGs (and that do not negatively impact PGs that do not contain OMAP data)?
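
In case it helps to frame the question: to see which scrub-related settings are currently in effect on an OSD, I've just been doing something like the following (osd.0 being an arbitrary example daemon):

  ceph config show-with-defaults osd.0 | grep scrub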

Thanks,

Gr. Stefan


