Re: Adding additional disks to the production cluster without performance impacts on the existing


 



Hi,

The "osd_recovery_sleep_hdd/ssd" options are a better way to fine-tune the impact of a backfill operation in this case.
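For example, the sleep values can be adjusted at runtime with injectargs; a rough sketch (the 0.2s/0.05s values are illustrative starting points, not recommendations):

```shell
# Insert a pause between recovery/backfill ops on each OSD so
# client I/O gets more of the disk. Values are in seconds.
ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0.2'
ceph tell osd.* injectargs '--osd_recovery_sleep_ssd 0.05'

# Check the running value on one OSD (run on the OSD's host):
ceph daemon osd.0 config get osd_recovery_sleep_hdd
```

Higher sleep values slow backfill down but reduce its impact on client I/O; setting them back to their defaults lets recovery run at full speed again.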

Paul

2018-06-07 20:55 GMT+02:00 David Turner <drakonstein@xxxxxxxxx>:
A recommendation for adding disks with minimal impact is to add them with a crush weight of 0 (configurable in the ceph.conf file) and then increase their weight in small increments until you reach the desired OSD weight.  That way you're never moving too much data at once and can stop at any time.
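A sketch of that workflow (osd.42 and the 0.5 step size are hypothetical; the final weight is normally the disk's size in TiB):

```shell
# In ceph.conf, make new OSDs start with weight 0 so they take
# no data until you are ready:
#   [osd]
#   osd_crush_initial_weight = 0

# Then raise the weight in small steps, waiting for backfill to
# finish and the cluster to return to HEALTH_OK between steps:
ceph osd crush reweight osd.42 0.5
ceph -s                      # watch recovery progress
ceph osd crush reweight osd.42 1.0
# ... repeat until the weight matches the disk size in TiB
```

Each step triggers a small, bounded amount of data movement, and you can pause at any weight if client latency suffers.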

If you don't want to be quite this paranoid, you can just manage the osd_max_backfills setting and call it a day, letting the OSDs go to their full weight from the start.  It all depends on your client I/O needs, how much data you have, the speed of your disks/network, etc.
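For instance, backfill concurrency can be throttled cluster-wide at runtime (the value 1 is illustrative; older releases defaulted to a much higher value than recent ones):

```shell
# Allow at most 1 concurrent backfill per OSD to keep the
# impact on client I/O low while the new disks fill up:
ceph tell osd.* injectargs '--osd_max_backfills 1'
```

Once the rebalance is done, the setting can be raised again if faster recovery is preferred for future events.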

On Wed, Jun 6, 2018 at 3:09 AM John Molefe <John.Molefe@xxxxxxxxx> wrote:
Hi everyone

We have completed all phases and the only remaining part is adding the disks to the current cluster, but I am afraid of impacting performance as it is in production.
Any guides or advice on how this can be achieved with the least impact on production?

Thanks in advance
John

Vrywaringsklousule / Disclaimer: http://www.nwu.ac.za/it/gov-man/disclaimer.html

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
