Re: TOO_MANY_PGS after upgrade from Nautilus to Octopus

Hi Patrick,

just disable the autoscaler everywhere (per pool and globally); see the command sketch below. It is completely ignorant of load distribution, IO patterns, object sizes and so on. If you know what you are doing, you will do better with little effort. You might also want to look at why it wants to increase the PG count on some pools. Apart from that, you should always use the full PG capacity that your cluster can afford: it will not only speed up many things, it will also improve resiliency and all-to-all recovery.
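
A minimal sketch of what "disable everywhere" could look like (assuming
Octopus-era commands; the pool names are just the ones from your health
detail output below):

# see what the autoscaler thinks each pool should have and why
ceph osd pool autoscale-status

# disable per pool
ceph osd pool set default.rgw.buckets.data pg_autoscale_mode off
ceph osd pool set os_glance pg_autoscale_mode off

# make "off" the default for newly created pools
ceph config set global osd_pool_default_pg_autoscale_mode off

# optionally, disable the mgr module entirely
ceph mgr module disable pg_autoscaler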

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Patrick Vranckx <patrick.vranckx@xxxxxxxxxxxx>
Sent: 08 November 2022 10:54:17
To: ceph-users
Subject:  TOO_MANY_PGS after upgrade from Nautilus to Octopus

Hi,

We are currently upgrading our cluster from Nautilus to Octopus.

After upgrading the mons and mgrs, we get warnings about the number of PGs.

Which parameter changed during the upgrade to explain these new warnings?
Nothing else was changed.

Is it risky to change the PGs per pool as proposed in the warnings? In
particular, to reduce from 4096 to 64!

Thanks in advance,

Patrick


[root@server4 ~]# ceph -s
   cluster:
     id:     ba00c030-382f-4d75-b150-5b17f77e57fe
     health: HEALTH_WARN
             clients are using insecure global_id reclaim
             6 pools have too few placement groups
             9 pools have too many placement groups

   services:
     mon: 3 daemons, quorum server2,server5,server6 (age 66m)
     mgr: server8(active, since 67m), standbys: server4, server1
     osd: 244 osds: 244 up (since 12m), 244 in (since 2w)
     rgw: 2 daemons active (server1, server4)

   task status:

   data:
     pools:   16 pools, 11441 pgs
     objects: 2.02M objects, 5.9 TiB
     usage:   18 TiB used, 982 TiB / 1000 TiB avail
     pgs:     11441 active+clean

   io:
     client:   862 KiB/s rd, 1.4 MiB/s wr, 61 op/s rd, 100 op/s wr

[root@server4 ~]# ceph health detail

...

[WRN] POOL_TOO_MANY_PGS: 9 pools have too many placement groups
     Pool default.rgw.buckets.index has 128 placement groups, should have 32
     Pool default.rgw.buckets.data has 4096 placement groups, should have 64
     Pool os_glance has 1024 placement groups, should have 32
...


[root@server4 ~]# ceph config get mon mon_max_pg_per_osd
250


In ceph.conf, we also set:

osd_max_pg_per_osd_hard_ratio = 3
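
(As a rough back-of-the-envelope check, assuming size-3 replication, which is
not stated above: 11441 PGs x 3 replicas / 244 OSDs is about 141 PG instances
per OSD, below mon_max_pg_per_osd = 250. The actual per-OSD count is visible
in the PGS column of:

[root@server4 ~]# ceph osd df
)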




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
