Re: Balancing PGs across OSDs


 



You have far too few PGs in one of the roots. Many OSDs carry so few
PGs that you should be seeing a lot of health warnings because of it.
The other root has a factor-of-5 spread in disk sizes, which isn't ideal
either.
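
For reference, a minimal sketch of how to check this with the plain ceph
CLI (the pool name and target pg_num below are placeholders, not
recommendations for your cluster):

    # the PGS column shows how many PGs each OSD carries
    ceph osd df tree

    # pg_num / pgp_num for every pool
    ceph osd pool ls detail

    # raise pg_num for an undersized pool; pick a power of two that gets
    # you on the order of ~100 PGs per OSD after replication
    ceph osd pool set <pool> pg_num <new_pg_num>

A sketch of driving the balancer by hand follows the quoted message below.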


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Nov 18, 2019 at 3:03 PM Thomas Schneider <74cmonty@xxxxxxxxx> wrote:
>
> Hi,
>
> in this <https://ceph.io/community/the-first-telemetry-results-are-in/>
> blog post I find this statement:
> "So, in our ideal world so far (assuming equal size OSDs), every OSD now
> has the same number of PGs assigned."
>
> My issue is that across all pools the number of PGs per OSD is not equal,
> and I conclude that this is causing very unbalanced data placement.
> In fact, the usage of the 1.6TB HDDs in the pool "hdb_backup" spans a
> range starting with
> osd.228 size: 1.6 usage: 52.61 reweight: 1.00000
> and ending with
> osd.145 size: 1.6 usage: 81.11 reweight: 1.00000
>
> This heavily impacts the amount of data that can be stored in the cluster.
>
> The Ceph balancer is enabled, but it is not solving the issue.
> root@ld3955:~# ceph balancer status
> {
>     "active": true,
>     "plans": [],
>     "mode": "upmap"
> }
>
> Therefore I would like to ask for suggestions on how to address this
> unbalanced data distribution.
>
> I have attached pastebin for
> - ceph osd df sorted by usage <https://pastebin.com/QLQHjA9g>
> - ceph osd df tree <https://pastebin.com/SvhP2hp5>
>
> My cluster has multiple crush roots representing different disks.
> In addition I have defined multiple pools, one pool for each disk type:
> hdd, ssd, nvme.
>
> THX
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
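
Once pg_num is in a sensible range, the upmap balancer can also be driven
by hand. A rough sketch (the plan name "myplan" is arbitrary; upmap needs
all clients to be at least luminous):

    # allow upmap by requiring luminous-or-newer clients
    ceph osd set-require-min-compat-client luminous

    # score the current distribution (lower is better)
    ceph balancer eval

    # build, inspect, score and apply an optimization plan
    ceph balancer optimize myplan
    ceph balancer show myplan
    ceph balancer eval myplan
    ceph balancer execute myplan

Note that the balancer can only move whole PGs, so with very few PGs per
OSD there is little it can do; fixing pg_num first is what actually helps.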
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



