Re: Adding multiple OSD

Depending on how well you burn in and test your new disks, I like to add only one failure domain of disks at a time, in case any of the disks you're adding turn out to be bad. If you're confident your disks aren't likely to fail during the backfilling, then you can go with more. I just added 8 servers (16 OSDs each) to a cluster of 15 servers (16 OSDs each) all at the same time, but we spent two weeks testing the hardware before adding the new nodes to the cluster.

If you add one failure domain at a time, then any DOA disks in the new nodes can only take out one copy of your data, rather than failing across multiple failure domains at once.
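For what it's worth, a rough sketch of what that can look like in practice, assuming a host-level failure domain and ceph-volume deployment (the host and device names here are placeholders, not from your cluster):

    # Optionally pause rebalancing while the new OSDs are created,
    # so data movement starts on your schedule
    ceph osd set norebalance

    # Create the OSDs on one new host (one failure domain) only
    ceph-volume lvm create --data /dev/sdb
    ceph-volume lvm create --data /dev/sdc

    # Confirm the new OSDs landed under the expected host bucket
    ceph osd tree

    # Allow backfill, then watch it finish before touching the next host
    ceph osd unset norebalance
    ceph -w

Only after the cluster is back to HEALTH_OK would you repeat this on the next host, which is what limits a bad batch of disks to a single copy of the data.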

On Mon, Dec 4, 2017 at 12:54 PM Karun Josy <karunjosy1@xxxxxxxxx> wrote:
Hi,

Is it recommended to add OSD disks one by one, or can I add a couple of disks at a time?

Current cluster size is about 4 TB.



Karun 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
