Re: Overlapping Roots - How to Fix?

> 
> Hello,
> 
> I've reviewed some recent posts on this list and also searched Google for
> info about autoscale and overlapping roots.  Nothing I have found explains
> how to fix the issue in a way I can understand - probably because I don't
> deal with CRUSH on a regular basis.


Check out the note in this section: https://docs.ceph.com/en/reef/rados/operations/placement-groups/#viewing-pg-scaling-recommendations

I added that note last year, I believe, as a result of how Rook was creating pools.
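
A quick way to confirm you're hitting this is to look at the autoscaler output directly (these are standard commands; the exact wording of any warning varies by release):

    ceph osd pool autoscale-status
    ceph health detail

When roots overlap, the autoscaler can't compute capacity for the affected pools, so they tend to be flagged in -- or simply missing from -- that output.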

> 
> From what I read and looking at 'ceph osd crush rule dump', it looks like
> the 8 replicated pools have
> 
>                    "op": "take",
>                    "item": -1,
>                    "item_name": "default"
> 
> whereas the 2 EC pools have
> 
>                    "op": "take",
>                    "item": -2,
>                    "item_name": "default~hdd"
> 
> To be sure, all of my OSDs are identical - HDD with SSD WAL/DB.
> 
> Please advise on how to fix this.

The subtlety that's easy to miss is that when you specify a device class for only *some* pools, the pools/rules that specify a device class effectively act on a "shadow" CRUSH root.  My terminology may be inexact there.
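You can see those shadow roots for yourself with

    ceph osd crush tree --show-shadow

which lists a shadow root per device class in use (e.g. default~hdd) alongside the regular hierarchy.  To the autoscaler, "default" and "default~hdd" count as two distinct, overlapping roots.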

So I think that if you adjust your CRUSH rules so that they all specify a device class -- in your case, the same device class for all of them -- your problem should go away (and balancer performance may improve as well).
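
As a sketch of what that looks like -- the rule name here is arbitrary and "host" is only an assumed failure domain, so match whatever your existing rules use:

    # Create a replicated rule that takes the hdd device class
    ceph osd crush rule create-replicated replicated_hdd default host hdd

    # Point each of the eight replicated pools at it
    ceph osd pool set <pool-name> crush_rule replicated_hdd

Once every rule "takes" default~hdd, there is only one effective root in play and the overlapping-roots warning should clear.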


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


