A recommendation for adding disks with minimal impact is to add them with a CRUSH weight of 0 (the initial weight is configurable in ceph.conf via osd_crush_initial_weight) and then increase their weight in small increments until you reach the desired OSD weight. That way you're never moving too much data at once and can stop at any time. A rough sketch of the process is below.
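Something like this (a sketch only; the OSD id osd.42 and the target weight are made up, so substitute your own ids and use a weight that matches your disk size, conventionally its size in TiB):

    # in ceph.conf on the new OSD hosts, set before creating the new OSDs
    [osd]
    osd_crush_initial_weight = 0

    # once the new OSDs are up and in, raise the weight a little at a time,
    # waiting for backfill to finish and the cluster to settle between steps
    ceph osd crush reweight osd.42 0.5
    ceph osd crush reweight osd.42 1.0
    ceph osd crush reweight osd.42 3.64    # e.g. final weight for a ~4 TB disk

    # watch progress and current weights with
    ceph -s
    ceph osd df tree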
If you don't want to be quite this paranoid, you can just manage the osd_max_backfills setting and call it a day, letting the OSDs come in at their full weight from the start. It all depends on your client I/O needs, how much data you have, the speed of your disks and network, etc.
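For example (assuming a Luminous-era cluster where injectargs is the usual way to change this at runtime; the exact values are just illustrative):

    # keep backfill throttled during production hours (1 is the default)
    ceph tell osd.* injectargs '--osd-max-backfills 1'

    # temporarily allow more parallel backfill during an off-peak window
    ceph tell osd.* injectargs '--osd-max-backfills 4'

The lower you keep osd_max_backfills, the longer the rebalance takes but the less it competes with client I/O.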
On Wed, Jun 6, 2018 at 3:09 AM John Molefe <John.Molefe@xxxxxxxxx> wrote:
Hi everyone

We have completed all phases and the only remaining part is just adding the disks to the current cluster, but I am afraid of impacting performance as it is in production. Any guides and advice on how this can be achieved with the least impact on production?

Thanks in advance
John