Re: Question about PG mgr/balancer/crush_compat_metrics

Sorry for not making that clear: we are using upmap mode. I just saw this in
the code and was wondering about its purpose.

As for the OSDs, none of them had a weight below 1.00 until one OSD reached
the 85% nearfull ratio. Before I reweighted that OSD, our
mgr/balancer/upmap_max_deviation was set to 5 and the PG distribution was
within about +/- 5 PGs per OSD. I also checked OSD usage and found it varies
from roughly 50% to 70%, while the average is 60%; is that distribution
acceptable? We have also enabled compression with snappy; could compression
affect the OSD usage?
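To put numbers on the spread, here is a minimal sketch that summarizes per-OSD utilization; it assumes the JSON shape produced by `ceph osd df --format json` (a "nodes" list where each entry has a "utilization" percentage), so please double-check the field names against your release:

```python
import json
import statistics

def usage_spread(osd_df_json: str):
    """Summarize per-OSD utilization from `ceph osd df --format json` output.

    Returns (min, max, mean, stdev) of the utilization percentages.
    """
    nodes = json.loads(osd_df_json)["nodes"]
    utils = [n["utilization"] for n in nodes]
    return (min(utils), max(utils),
            statistics.mean(utils), statistics.pstdev(utils))

# Made-up numbers resembling the 50-70% spread described above:
sample = json.dumps({"nodes": [
    {"id": 0, "utilization": 50.2},
    {"id": 1, "utilization": 61.0},
    {"id": 2, "utilization": 69.8},
]})
lo, hi, mean, sd = usage_spread(sample)
```

On a live cluster you would feed it the output of `ceph osd df --format json` instead of the fabricated sample.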

On Wed, Nov 8, 2023 at 7:24 AM <bryansoong21@xxxxxxxxx> wrote:

> Hello,
>
> We are running a Ceph Pacific (16.2.10) cluster with the balancer module
> enabled, but the usage of some OSDs keeps growing and has reached
> mon_osd_nearfull_ratio (85%, the default). We would expect the balancer
> module to be doing some balancing work here.
>
> So I checked our balancer configuration and found that
> "crush_compat_metrics" is set to "pgs,objects,bytes". All three values are
> used in src.pybind.mgr.balancer.module.Module.calc_eval. However, when
> performing the actual balancing, only the first key is used, in
> src.pybind.mgr.balancer.module.Module.do_crush_compat:
>         metrics = self.get_module_option('crush_compat_metrics').split(',')
>         key = metrics[0] # balancing using the first score metric
>
> My question is: why do we compute the score from all three metrics but
> balance using only the first one?
>
> Thanks.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
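For anyone skimming the thread, the behaviour described in the quoted snippet can be reproduced standalone; this is only an illustration of the two quoted lines, with the option value taken from the configuration discussed above:

```python
# Value of the mgr/balancer/crush_compat_metrics module option as
# reported in the quoted message.
crush_compat_metrics = "pgs,objects,bytes"

# calc_eval scores the distribution against every metric in the list...
metrics = crush_compat_metrics.split(',')

# ...but do_crush_compat drives the actual balancing from the first
# metric only, so "objects" and "bytes" affect the reported score,
# not the generated moves.
key = metrics[0]
```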


-- 
Thanks & best regards...

Bryan (Longchao Song)



