Re: Balancer: Unable to find further optimization


 



On 2024-11-27 17:53, Anthony D'Atri wrote:

Hi,

My Ceph cluster is out of balance. The number of PGs per OSD ranges from about 50 up to 100. This is far from balanced.

Do you have multiple CRUSH roots or device classes? Are all OSDs the same weight?

Yes, I have 2 CRUSH roots and 2 device classes. One CRUSH root uses the ssd device class, the other uses the hdd device class. I am only talking about the HDDs.

My disk sizes differ from 1.6T up to 2.4T.

Ah. The number of PG replicas as reported by `ceph osd df` should be proportional to the OSD capacities. With some of your OSDs double the size of others, it is natural that some will have double the number of PGs.

That said, 2.4T is an unusual size for a drive. What specific storage drives are you using? Chances are that you’d benefit from bumping pg_num on at least some of your pools.
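As context for the pg_num suggestion, a common rule of thumb from the Ceph documentation is to target roughly 100 PG replicas per OSD, so total PGs per pool is about (OSD count × target per OSD) / replica size, rounded to a power of two. The sketch below illustrates that arithmetic; the OSD count and pool size are illustrative assumptions, not figures from this thread.

```python
def suggested_pg_num(num_osds, target_pgs_per_osd=100, pool_size=3):
    """Rule-of-thumb pg_num estimate (assumed ~100 PG replicas per OSD)."""
    raw = num_osds * target_pgs_per_osd / pool_size
    # pg_num is conventionally a power of two; round to the nearest one.
    p = 1
    while p * 2 <= raw:
        p *= 2
    return p if raw - p < 2 * p - raw else 2 * p

# Hypothetical example: 24 HDD OSDs, 3x replication.
print(suggested_pg_num(24))
```

Actual sizing should also account for the number of pools sharing the OSDs and expected data distribution between them.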

Sorry, I was wrong. It is 1.6T and 2.2T (not 2.4T).
The 2.2T OSDs range between 65-104 PGs, while the 1.6T OSDs range between 47-80 PGs.
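As a quick sanity check on those figures, PG counts should scale with capacity, so the ratio of PGs between the two drive sizes should roughly match the capacity ratio 2.2/1.6 ≈ 1.375. Comparing the midpoints of the reported ranges (a rough sketch, using only the numbers quoted above):

```python
# Capacity ratio between the two OSD sizes.
small_tb, large_tb = 1.6, 2.2
capacity_ratio = large_tb / small_tb  # ~1.375

# Reported PG ranges per OSD (from the message above).
small_pgs = (47, 80)
large_pgs = (65, 104)

mid = lambda r: sum(r) / 2
observed_ratio = mid(large_pgs) / mid(small_pgs)

print(f"capacity ratio:    {capacity_ratio:.2f}")
print(f"observed PG ratio: {observed_ratio:.2f}")
```

The between-size ratio roughly tracks capacity, which suggests the two drive sizes explain the difference between the groups; the wide spread *within* each size (e.g. 65-104 on the 2.2T OSDs) is the part a balancer should be able to reduce.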

Bumping the PGs per pool might balance it a bit, but even without bumping pg_num, the balancer module should balance it as well, right?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



