Re: Understanding filesystem size

> On Jan 14, 2025, at 10:51 AM, Nicola Mori <mori@xxxxxxxxxx> wrote:
> 
> Dear Anthony,
> 
> the autoscaler has finished its work, no OOM disaster happened

:D

> , and the current situation is:
> 
> # ceph osd pool autoscale-status
> POOL               SIZE  TARGET SIZE                RATE  RAW CAPACITY RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM AUTOSCALE  BULK
> .mgr             248.4M                              3.0        323.8T 0.0000                                  1.0       1              on    False
> wizard_metadata   1171M                              3.0        323.8T 0.0000                                  4.0      16       16384  on    True
> wizard_data      79648G               1.3333333730697632        323.8T 0.3203                                  1.0    2048              on    True


> The min and max OSD occupancy are now much more similar: 30.34% and 33.66%, which seems reasonable if I understand correctly.

Yes, absolutely.  You could squeeze that tighter if you try hard enough, but IMHO you’re probably fine where you are.
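
If you want to keep an eye on that spread without eyeballing `ceph osd df` by hand, a rough sketch (nothing the balancer itself uses, and it assumes the JSON output of `ceph osd df -f json` carries per-OSD "name" and "utilization" fields) could be:

    import json
    import subprocess

    # Summarize OSD fullness spread from `ceph osd df -f json`.
    # Field names ("nodes", "name", "utilization") are an assumption here.
    out = subprocess.run(["ceph", "osd", "df", "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    utils = {n["name"]: n["utilization"] for n in json.loads(out)["nodes"]}

    lo_osd = min(utils, key=utils.get)
    hi_osd = max(utils, key=utils.get)
    print(f"min {utils[lo_osd]:.2f}% ({lo_osd})  "
          f"max {utils[hi_osd]:.2f}% ({hi_osd})  "
          f"spread {utils[hi_osd] - utils[lo_osd]:.2f}%")
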

> The available space of the filesystem, however, is 227 TiB, which is just 70% of the 324 TiB raw space. It is quite improved but still short of the theoretical maximum of 75% for my 6+2 EC data pool. Neglecting the .mgr and metadata pools, I'd really like to recover the missing 5%, which is about 16 TiB (i.e. roughly half a host). Is there anything more that can be attempted?

Those calculations are a bit subtle; I fear I have no additional wisdom on a Monday for that gap.
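
For what it's worth, re-running the arithmetic with just the numbers you quoted (nothing Ceph-specific) does confirm the size of the gap you're chasing:

    # Back-of-the-envelope check using only the figures quoted above (TiB throughout).
    raw = 323.8            # raw capacity from autoscale-status
    usable = 6 / 8         # usable fraction of a 6+2 EC profile
    theoretical = raw * usable
    reported = 227.0       # available space reported for the filesystem

    print(f"theoretical max: {theoretical:.1f} TiB")              # ~242.9 TiB
    print(f"reported avail:  {reported:.1f} TiB")
    print(f"gap:             {theoretical - reported:.1f} TiB")   # ~15.9 TiB, the ~16 quoted above
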

> 
> One more question: on the web UI I keep receiving notifications like "PG autoscaler increasing pool 2 PGs from 16 to 16384", once per minute, but no change in PG count is actually happening. From the autoscale-status output posted above I see that NEW PG_NUM for the metadata pool is 16384, but I don't understand what that means.

Use the source, Luke ;)

    # From the pg_autoscaler mgr module: the progress-event text is rebuilt
    # from pg_num and pg_num_target on every call.
    def update(self, module: MgrModule, progress: float) -> None:
        desc = 'increasing' if self.pg_num < self.pg_num_target else 'decreasing'
        module.remote('progress', 'update', self.ev_id,
                      ev_msg="PG autoscaler %s pool %d PGs from %d to %d" %
                      (desc, self.pool_id, self.pg_num, self.pg_num_target),
                      ev_progress=progress,
                      refs=[("pool", self.pool_id)])
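
Note that the message is regenerated from pg_num and pg_num_target on every call, and the autoscaler loop runs roughly once a minute by default (if I recall correctly), which would explain why the dashboard keeps re-announcing it even while pg_num itself isn't moving.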


I think NEW PG_NUM is the target and PG_NUM is the current value, though I wouldn’t expect the metadata pool to want to be that large.  I would expect PG_NUM to be increasing.  What does `ceph osd dump | grep pool` show for pg_num and pgp_num?
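
If you'd rather check every pool at once, a quick sketch (assuming the JSON form of `ceph osd dump` exposes "pool_name", "pg_num" and "pg_num_target" per pool, which releases new enough to run the autoscaler should) would be:

    import json
    import subprocess

    # Compare current vs. target PG counts for every pool in one pass.
    # The per-pool field names are an assumption; verify against your release.
    dump = json.loads(subprocess.run(["ceph", "osd", "dump", "-f", "json"],
                                     capture_output=True, text=True, check=True).stdout)
    for pool in dump["pools"]:
        cur = pool["pg_num"]
        tgt = pool.get("pg_num_target", cur)
        note = "  <-- still adjusting" if cur != tgt else ""
        print(f"{pool['pool_name']:<20} pg_num={cur:<6} pg_num_target={tgt:<6}{note}")
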

> 
> Thank you,
> 
> Nicola

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



