Re: adding osd node best practice

1) That's an awful lot of mons. Are they VMs or something? My sense is that more than 5 mons yields diminishing returns at best.
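
As a quick sanity check, something like this will show how many mons you actually have and how many are in quorum:

    $ ceph mon stat
    $ ceph quorum_status --format json-pretty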

2) Only two OSD nodes? I assume you aren't running 3 copies of data, or a rack-level failure domain.
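
If you want to double-check, the pool's replica count and the CRUSH rule's failure domain are both easy to inspect ('rbd' below is just a placeholder pool name, substitute your own):

    $ ceph osd pool get rbd size
    $ ceph osd crush rule dump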

3) The new nodes will have fewer OSDs? Be careful with host / OSD weighting to avoid a gross imbalance in disk utilization.
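
To see weight and utilization side by side, per host and per OSD, something like this works ('ceph osd df tree' assumes a reasonably recent release; plain 'ceph osd tree' is available everywhere):

    $ ceph osd tree
    $ ceph osd df tree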

4) I've had experience tripling the size of a cluster in one shot, and with backfilling a whole rack of 100+ OSDs in one shot. Cf. BÖC's 'Veteran of the Psychic Wars'. I do not recommend this approach, especially if you don't have truly embarrassing amounts of RAM. I suggest disabling scrubs / deep-scrubs and throttling the usual backfill / recovery values, including setting recovery op priority as low as 1 for the duration (see the sketch below). Deploy one OSD at a time. Yes, this will cause data to move more than once, but it will also minimize your exposure to as-yet-undiscovered problems with the new hardware and to the magnitude of peering storms, and thus client impact. One OSD on each new system, sequentially. Check the weights in the CRUSH map. Wait for backfill to complete to HEALTH_OK. Let them soak for a few days before serially deploying the rest.
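
A minimal sketch of the throttling I mean, assuming a release where injectargs is the runtime knob (newer releases can use 'ceph config set osd ...' to the same effect); the exact values are a starting point, not gospel:

    # Pause scrubbing for the duration of the backfill
    $ ceph osd set noscrub
    $ ceph osd set nodeep-scrub

    # Throttle backfill / recovery on all OSDs
    $ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

    # Deploy one OSD, then watch until the cluster returns to HEALTH_OK
    $ ceph -s

    # When everything has soaked, put scrubs back
    $ ceph osd unset noscrub
    $ ceph osd unset nodeep-scrub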
