Re: Odd auto-scaler warnings about too few/many PGs

On Fri, Jan 26, 2024 at 3:35 AM Torkil Svensgaard <torkil@xxxxxxxx> wrote:
>
> The most weird one:
>
> Pool rbd_ec_data stores 683TB in 4096 pgs -> warn should be 1024
> Pool rbd_internal stores 86TB in 1024 pgs -> warn should be 2048
>
> That makes no sense to me based on the amount of data stored. Is this a
> bug or what am I missing? Ceph version is 17.2.7.

I'm guessing these pools are in different storage classes or have
different CRUSH rules. In that case they sit on different sets of
OSDs, and the PG autoscaler pro-rates each pool only against the
other pools sharing the same OSDs, with the goal of keeping a similar
number of PGs per OSD across the whole cluster. So the suggested
pg_num depends on a pool's share of its own set of OSDs, not just on
the absolute amount of data stored.
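
To make that concrete, here's a rough back-of-the-envelope sketch in
Python of the pro-rating idea. It is not the actual pg_autoscaler
code (which also applies a bias, honours target_size_ratio /
target_size_bytes overrides, and only warns once pg_num is off by a
large factor), and all the pool names, sizes and OSD counts below are
made up; it assumes the default mon_target_pg_per_osd of 100.

def nearest_power_of_two(n: float) -> int:
    """Round a raw PG count to the nearest power of two."""
    if n < 1:
        return 1
    lo = 1 << (int(n).bit_length() - 1)
    hi = lo * 2
    return lo if (n - lo) < (hi - n) else hi

def pg_targets(pools, osds_in_root, target_pg_per_osd=100):
    """pools: {name: {"stored_tb": ..., "size": ...}}, all assumed to
    sit on the same CRUSH root (the same set of OSDs). "size" is the
    replica count, or k+m for an EC pool."""
    total_tb = sum(p["stored_tb"] for p in pools.values())
    # PG *replicas* this root can carry at the per-OSD target.
    root_budget = osds_in_root * target_pg_per_osd
    targets = {}
    for name, p in pools.items():
        share = p["stored_tb"] / total_tb      # pool's share of the root's data
        raw = share * root_budget / p["size"]  # replicas -> PGs for this pool
        targets[name] = nearest_power_of_two(raw)
    return targets

# Hypothetical HDD root (100 OSDs) and a much smaller SSD root (12 OSDs):
print(pg_targets({"hdd_ec_pool":  {"stored_tb": 600, "size": 6},   # e.g. EC 4+2
                  "hdd_rep_pool": {"stored_tb": 60,  "size": 3}},
                 osds_in_root=100))
print(pg_targets({"ssd_pool": {"stored_tb": 20, "size": 3}},
                 osds_in_root=12))

With those made-up numbers the 60TB HDD pool comes out at 256 PGs
while the 20TB SSD pool comes out at 512, which is exactly the "more
PGs per TB on the smaller class" effect.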

The solid-state pools are smaller, so they end up with more PGs per
TB of capacity. On my cluster a 2TB SSD carries roughly the same
number of PGs as a 12TB HDD, because the target is a per-OSD PG
count, not a per-TB one.

If you work through the manual PG sizing guides you'd probably end up
with a similar result. Just remember that each storage class needs to
be sized separately; otherwise you'll probably end up with very few
PGs on your solid-state OSDs.
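
A quick sanity check when doing it by hand (again with made-up
pg_num/size/OSD numbers) is to count how many PG replicas actually
land on each OSD per device class; if the SSD class comes out at only
a handful per OSD, the pools there are undersized:

# Made-up example: (pg_num, size or k+m) per pool, grouped by device class.
hdd_pools = [(4096, 6), (1024, 3)]
ssd_pools = [(32, 3)]
hdd_osds, ssd_osds = 300, 12

for cls, pools, osds in [("hdd", hdd_pools, hdd_osds),
                         ("ssd", ssd_pools, ssd_osds)]:
    pg_replicas = sum(pg * size for pg, size in pools)
    # Aim for something on the order of ~100 PG replicas per OSD.
    print(f"{cls}: ~{pg_replicas / osds:.0f} PG replicas per OSD")

In that toy example the HDD OSDs carry about 92 PG replicas each
while the SSDs only get about 8, which is the sort of imbalance you
get by sizing SSD pools purely by TB.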

Oh, and I personally set the autoscaler to "on" for the solid-state
pools and "warn" for the HDD pools, since rebalancing HDDs takes so
much longer.
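
For what it's worth, the per-pool knob is pg_autoscale_mode, i.e.
"ceph osd pool set <pool> pg_autoscale_mode on|warn|off". If you want
to script it, something like this works (the pool names and the
pool-to-mode mapping below are obviously hypothetical):

#!/usr/bin/env python3
# Set pg_autoscale_mode per pool via the normal CLI. Adjust the
# hypothetical pool names and modes to your own cluster.
import subprocess

MODES = {
    "some_ssd_pool": "on",    # solid state: let the autoscaler act
    "some_hdd_pool": "warn",  # HDD: only warn, rebalancing is slow
}

for pool, mode in MODES.items():
    subprocess.run(
        ["ceph", "osd", "pool", "set", pool, "pg_autoscale_mode", mode],
        check=True,
    )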

--
Rich
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



