Best way to add OSDs - whole node or one by one?

Hello,

I am currently in the process of expanding my Nautilus cluster from 3 nodes (combined OSD/MGR/MON/MDS) to 6 OSD nodes and 3 management nodes. The old and new OSD nodes all have 8 x 12TB HDDs plus NVMe. The front and back networks are 10 Gb/s.

Last Friday evening I added a whole new OSD node, increasing the OSD HDD count from 24 to 32. As of this morning the cluster is still rebalancing, with periodic warnings about degraded PGs and missed deep-scrub deadlines. After 4.5 days my misplaced PGs are down from 33% to 2%.

My question: For a cluster of this size, what is the best-practice procedure for adding OSDs? Should I use 'ceph-volume prepare' to lay out the new OSDs but only activate them a couple at a time, or should I continue adding whole nodes at once?
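
Concretely, the couple-at-a-time approach I have in mind would look roughly like this (device paths and NVMe partitions are just placeholders for the real ones on my nodes):

```shell
# Prepare (but don't yet start) a pair of new OSDs on the node.
ceph-volume lvm prepare --data /dev/sdb --block.db /dev/nvme0n1p1
ceph-volume lvm prepare --data /dev/sdc --block.db /dev/nvme0n1p2

# Activate the prepared OSDs and let the cluster rebalance
# before preparing/activating the next pair.
ceph-volume lvm activate --all

# Watch progress before adding more:
ceph -s
ceph osd df tree
```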

Maybe this has to do with a maximum percentage of misplaced PGs. The first new node increased the OSD capacity by 33% and resulted in 33% PG misplacement. The next node will only increase capacity by 25%. If too high a percentage of misplaced PGs negatively impacts rebalancing or data availability, what is a reasonable ceiling for this percentage?
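
To make the arithmetic explicit (this is just the capacity-increase calculation above, nothing Ceph-specific — it assumes uniform CRUSH weights across OSDs):

```python
def capacity_increase(current_osds: int, added_osds: int) -> float:
    """Fractional capacity increase from adding OSDs of equal size/weight."""
    return added_osds / current_osds

# First new node: 24 -> 32 OSDs
print(round(capacity_increase(24, 8), 2))  # 0.33
# Second new node: 32 -> 40 OSDs
print(round(capacity_increase(32, 8), 2))  # 0.25
```

So each successive node disturbs a smaller fraction of the cluster, which is part of why I'm asking whether there is a ceiling worth staying under.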

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
607-760-2328 (Cell)
607-777-4641 (Office)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
