Re: jj's "improved" ceph balancer

> Doesn't the existing mgr balancer already balance the PGs for each pool individually? So in your example, the PGs from the loaded pool will be balanced across all osds, as will the idle pool's PGs. So the net load is uniform, right?

Only if there’s a single CRUSH root and all pools share the same set of OSDs. I suspect what he’s getting at is the case where pools use different sets of OSDs, or (eek) live on partly overlapping sets of OSDs.
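
For anyone wanting to check which case applies on their cluster, something like the following (stock Ceph CLI; output will of course name your own pools and rules) shows whether all pools resolve to the same CRUSH root / device class:

    # which CRUSH rule each pool uses
    ceph osd pool ls detail | grep crush_rule

    # what each rule actually takes (root and device class)
    ceph osd crush rule dump

    # the hierarchy, to see which OSDs sit under each root
    ceph osd crush tree

If two rules take different roots (or device classes), the balancer is optimizing each pool against a different set of OSDs, so per-pool uniformity no longer implies uniform load across the cluster.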


> OTOH I could see a workload/capacity imbalance if there are mixed-capacity but equal-performance devices (e.g. a cluster with 50% 6TB HDDs and 50% 12TB HDDs).
> In that case we're probably better off treating the disks as uniform in size until the smaller OSDs fill up.

Primary affinity can help, with reads at least, but it’s a bit fussy.
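
To make the imbalance concrete: with equal numbers of 6TB and 12TB drives and capacity-proportional CRUSH weights, each 12TB OSD carries twice the PGs of a 6TB one, so roughly 2/3 of the IO lands on half the spindles. A rough sketch of the primary-affinity workaround, assuming purely for illustration that osd.100 through osd.149 are the 12TB drives (the IDs and the 0.5 starting value are placeholders to iterate from; older releases may also need mon_osd_allow_primary_affinity enabled):

    # make the large drives half as likely to be chosen as primary
    for id in $(seq 100 149); do
        ceph osd primary-affinity osd.$id 0.5
    done

    # check the result: count of PGs for which a given OSD is primary
    ceph pg ls-by-primary osd.100 | wc -l
    ceph pg ls-by-primary osd.0   | wc -l

This only shifts reads (primaries); writes still hit every replica, which is part of why it's fussy.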
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



