Re: Understanding filesystem size

Oh wait, I got confused, I thought you meant the max_pg_per_osd setting, please ignore my last comment. 😁

Quoting Anthony D'Atri <anthony.datri@xxxxxxxxx>:

Default is 100, no? I have a PR open to double it.

The data pool has the autoscaler disabled, so you would need to either enable it or increase pg_num manually.
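For reference, assuming the pool is named wizard_data as in the output further down, the two options would look roughly like this (the pg_num value is purely illustrative; pick one appropriate for your OSD count):

```shell
# Option 1: re-enable the autoscaler for this pool so it manages
# pg_num on its own:
ceph osd pool set wizard_data pg_autoscale_mode on

# Option 2: leave the autoscaler off and raise pg_num by hand;
# 128 here is only an example value, not a recommendation:
ceph osd pool set wizard_data pg_num 128
```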

On Jan 3, 2025, at 11:03 AM, Eugen Block <eblock@xxxxxx> wrote:

I wouldn’t decrease mon_target_pg_per_osd below the default (250); Anthony is usually someone who recommends the opposite and wants to increase the default. So I’m not sure what exactly he’s aiming for… 😉

Quoting Nicola Mori <mori@xxxxxxxxxx>:

So you suggest running this command:

 ceph config set global mon_target_pg_per_osd 200

right? If I understood the meaning of this parameter correctly, it only takes effect when automated PG scaling is on, but that is currently off for the data pool:

 # ceph osd pool get wizard_data pg_autoscale_mode
 pg_autoscale_mode: off

So should I proceed anyway? Sorry to bother you but I'm not sure I understood your suggestion and I fear I could make a mistake at this point.
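Before changing anything, the current state can be inspected with standard ceph commands (the output of course depends on the cluster; wizard_data is the pool name from the query above):

```shell
# Current value of the target PGs-per-OSD setting:
ceph config get mon mon_target_pg_per_osd

# The autoscaler's view of all pools (rate, target size, suggested pg_num):
ceph osd pool autoscale-status

# Current PG count of the data pool in question:
ceph osd pool get wizard_data pg_num
```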


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

