Re: Balancing PGs across OSDs

On 12/2/19 5:55 PM, Lars Täuber wrote:
Here we have a similar situation.
After adding some OSDs to the cluster, the PGs are not equally distributed across the OSDs.

The balancer mode is set to upmap.
The docs https://docs.ceph.com/docs/master/rados/operations/balancer/#modes say:
"This CRUSH mode will optimize the placement of individual PGs in order to achieve a balanced distribution. In most cases, this distribution is “perfect,” with an equal number of PGs on each OSD (+/-1 PG, since they might not divide evenly)."

This is not the case with our cluster. The number of PGs per OSD ranges from 157 to 214, and as a result the usage of the HDDs varies from 60% to 82%.
$ ceph osd df class hdd
[…]
MIN/MAX VAR: 0.89/1.22  STDDEV: 7.47
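
For what it is worth, the balancer's own view of the distribution can be checked with something like this (a minimal sketch using the standard mgr balancer commands; exact output varies by release):

$ ceph balancer status                    # confirm the module is active and the mode really is upmap
$ ceph osd get-require-min-compat-client  # upmap requires this to be luminous or newer
$ ceph balancer eval                      # the balancer's score for the current distribution (lower is better)
$ ceph balancer eval-verbose              # more detailed breakdown of that score

Note also that the balancer module has an upmap_max_deviation option; depending on the release it is either a fraction or an absolute number of PGs, and where it is an absolute count with a default of 5, a spread wider than +/-1 PG is expected behaviour rather than a bug. Newer releases tighten it with something like `ceph config set mgr mgr/balancer/upmap_max_deviation 1`, older ones via the config-key interface; treat the exact knob and default for 14.2.4 as something to verify.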

In the meantime I had to use reweight-by-utilization to get the cluster back into a healthy state, because it had a near-full PG and near-full OSDs.
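
One caveat with that workaround: reweight-by-utilization changes the legacy override weights (the REWEIGHT column in `ceph osd df`), and those overrides are generally reported to work against the upmap balancer. A rough sketch for checking and, once there is enough free space again, backing them out (resetting a weight moves data, so do it gradually):

$ ceph osd test-reweight-by-utilization   # dry run: shows what reweight-by-utilization would change, without applying it
$ ceph osd reweight <osd-id> 1.0          # reset one override weight back to 1.0 (<osd-id> is a placeholder; repeat per affected OSD)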

The cluster is Nautilus 14.2.4 on Debian 10 with packages from croit.io.

I am thinking about switching back to crush-compat.
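
If you do switch, the mode change itself is just the usual balancer commands (a sketch; crush-compat adjusts compat weight-set values, so expect some data movement while it converges):

$ ceph balancer off
$ ceph balancer mode crush-compat
$ ceph balancer on
$ ceph balancer status   # verify the new mode took effect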

Is there a known bug regarding this?


Please paste the output of `ceph osd df tree`, `ceph osd pool ls detail`, and `ceph osd crush rule dump`.
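
Capturing those to files makes them easier to share (plain output redirection; the file names are just examples):

$ ceph osd df tree         > osd-df-tree.txt
$ ceph osd pool ls detail  > pool-ls-detail.txt
$ ceph osd crush rule dump > crush-rule-dump.txt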




k
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



