Our Ceph cluster performs very poorly, or not at all, while the
remapping procedure is underway. We are using replica 2 and apply the
following tweaks while it is in progress:
ceph tell osd.* injectargs '--osd-recovery-max-active 20'
ceph tell osd.* injectargs '--osd-recovery-threads 20'
ceph tell osd.* injectargs '--osd-max-backfills 20'
ceph -w    # watch recovery/backfill progress
ceph osd set noscrub
ceph osd set nodeep-scrub
After the remapping finishes, we set these back to default.
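For reference, the reset step looks roughly like this. The values below
are what I believe our release ships as defaults, but they vary between
Ceph versions, so confirm yours first (e.g. via the admin socket with
ceph daemon osd.0 config show):

ceph tell osd.* injectargs '--osd-recovery-max-active 3'
ceph tell osd.* injectargs '--osd-recovery-threads 1'
ceph tell osd.* injectargs '--osd-max-backfills 1'
ceph osd unset noscrub
ceph osd unset nodeep-scrub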
Are any of these settings causing our problems, or is there another way
to limit the impact of remapping so that users do not think the system
is down while we add more storage?
thanks,
Dan