Hi Jean-Charles,
I will validate this config in my laboratory and production, and share the results here.
Thanks.
Regards,
Fabio Abreu
On Mon, Feb 18, 2019 at 3:18 PM Jean-Charles Lopez <jelopez@xxxxxxxxxx> wrote:
Hi Fabio,

Have a look here: https://github.com/ceph/ceph/blob/luminous/src/common/options.cc#L2355

It's designed to relieve the pressure that recovery and backfill put on both the drives and the network: it slows down these activities by introducing a sleep after each of these ops.

Regards
JC

On Feb 18, 2019, at 09:28, Fabio Abreu <fabioabreureis@xxxxxxxxx> wrote:

Hi everybody!

I am configuring my cluster to receive new disks and PGs. After setting up the main standard configuration, I came across the parameter "osd recovery sleep" and would like to use it in the production environment, but I have only found sparse documentation about it.

Does anyone have experience with this parameter?

The only discussion I found on the internet about this:

My main configuration for receiving new OSDs in a Jewel 10.2.7 cluster:

Before adding the new nodes:

$ ceph tell osd.* injectargs '--osd-max-backfills 2'
$ ceph tell osd.* injectargs '--osd-recovery-threads 1'
$ ceph tell osd.* injectargs '--osd-recovery-op-priority 2'
$ ceph tell osd.* injectargs '--osd-client-op-priority 63'
$ ceph tell osd.* injectargs '--osd-recovery-max-active 2'

After adding the new nodes:

$ ceph tell osd.* injectargs '--osd-max-backfills 1'
$ ceph tell osd.* injectargs '--osd-recovery-threads 1'
$ ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
$ ceph tell osd.* injectargs '--osd-client-op-priority 63'
$ ceph tell osd.* injectargs '--osd-recovery-max-active 1'

Regards,
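For anyone following along, a sketch of how the sleep option discussed in this thread could be injected in the same style as the commands above. The option name and default come from the Luminous options.cc link; the 0.1 value is only an illustrative starting point, not a recommendation, and you should verify the parameter exists in your release (e.g. Jewel 10.2.7) before relying on it:

```shell
# Throttle recovery/backfill by sleeping (in seconds) between ops.
# 0 disables the sleep; higher values slow recovery further.
$ ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'

# Check the running value on a given OSD (run on that OSD's host;
# osd.0 here is just an example daemon name):
$ ceph daemon osd.0 config get osd_recovery_sleep
```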
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com