Balancing PGs across OSDs

Hi,

In this blog post <https://ceph.io/community/the-first-telemetry-results-are-in/>
I found the following statement:
"So, in our ideal world so far (assuming equal size OSDs), every OSD now
has the same number of PGs assigned."

My issue is that, across all pools, the number of PGs per OSD is not equal,
and I conclude that this is causing very unbalanced data placement.
In fact, the utilization of the 1.6 TB HDDs backing the pool "hdb_backup"
ranges from
osd.228 size: 1.6 usage: 52.61 reweight: 1.00000
to
osd.145 size: 1.6 usage: 81.11 reweight: 1.00000
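To quantify the spread, this is roughly how I extract the per-OSD PG count and
utilization (this assumes the JSON output of "ceph osd df" carries per-OSD
"name", "pgs" and "utilization" fields; please correct me if the field names
differ):

root@ld3955:~# ceph osd df -f json \
    | jq -r '.nodes[] | [.name, .pgs, .utilization] | @tsv' \
    | sort -k3 -n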

This heavily impacts the amount of data that can be stored in the cluster.

The Ceph balancer is enabled, but it is not solving this issue:
root@ld3955:~# ceph balancer status
{
    "active": true,
    "plans": [],
    "mode": "upmap"
}
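
These are the commands I intend to try next to inspect and drive the balancer
manually (the plan name "myplan" is just an example); please tell me if this is
not the right approach:

root@ld3955:~# ceph balancer eval                  # score of the current distribution
root@ld3955:~# ceph balancer optimize myplan       # ask the balancer to compute a plan
root@ld3955:~# ceph balancer show myplan           # inspect the proposed upmap items
root@ld3955:~# ceph balancer eval myplan           # expected score after applying the plan
root@ld3955:~# ceph balancer execute myplan        # apply it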

I would therefore like to ask for suggestions on how to fix this unbalanced
data distribution.

I have attached pastebin links for
- ceph osd df sorted by usage <https://pastebin.com/QLQHjA9g>
- ceph osd df tree <https://pastebin.com/SvhP2hp5>

My cluster has multiple CRUSH roots representing the different disks.
In addition, I have defined multiple pools, one pool for each disk type:
hdd, ssd, nvme.
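For context, the pools are tied to the roots with rules roughly like the
following (the rule and root names here are examples, not the exact names in
my cluster):

root@ld3955:~# ceph osd crush rule create-replicated rule-hdd root-hdd host
root@ld3955:~# ceph osd crush rule create-replicated rule-ssd root-ssd host
root@ld3955:~# ceph osd pool set hdb_backup crush_rule rule-hdd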

THX
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



