On 6/2/22 20:46, Ramana Venkatesh Raja wrote:
<snip>
We currently have 512 PGs allocated to this pool. The autoscaler suggests
reducing this to "32" PGs. That would leave only a fraction of the OSDs
holding *all* of the metadata. I can tell you, based on experience, that
this is not good advice (the longer story is here [1]). At the very least
you want to spread all OMAP data over as many (fast) disks as possible,
so in this case it should advise 256.
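
For reference, pinning the pool at 256 PGs and keeping the autoscaler from
shrinking it would look roughly like this; "cephfs_metadata" is just a
placeholder for the actual metadata pool name:

  # make sure the autoscaler won't shrink this pool
  ceph osd pool set cephfs_metadata pg_autoscale_mode off
  # pin the pool at 256 PGs, as suggested above
  ceph osd pool set cephfs_metadata pg_num 256
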
Curious, how many PGs do you have in total in all the pools of your
Ceph cluster? What are the other pools (e.g., data pools) and each of
their PG counts?
TARGET SIZE, TARGET RATIO and EFFECTIVE RATIO are not set on any pool, so I
left those columns out to keep the lines readable:

POOL                   SIZE    RATE  RAW CAPACITY  RATIO   BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
REDACTEDXXXXXXXXXXXX   7724G   3.0   628.7T        0.0360  1.0   512                 off        < rbd
REDACTEDXXXXXXXXXXXX   31806M  3.0   628.7T        0.0001  1.0   128     32          off        < rbd pool
REDACTEDXXXXXXXXXXXX   53914G  3.0   628.7T        0.2512  1.0   4096    1024        off        < rbd pool
REDACTEDXXXXXXXXXXXX   5729G   3.0   628.7T        0.0267  1.0   256                 off        < rbd pool
REDACTEDXXXXXXXXXXXX   72411G  3.0   628.7T        0.3374  1.0   2048                off        < cephfs data pool
REDACTEDXXXXXXXXXXXX   999.4G  3.0   628.7T        0.0047  1.0   512     32          off        < rbd pool
REDACTEDXXXXXXXXXXXX   355.7k  3.0   628.7T        0.0000  1.0   8       32          off        < librados, used for locking (samba ctdb)
REDACTEDXXXXXXXXXXXX   19      3.0   628.7T        0.0000  1.0   256     32          off        < rbd, test volume
REDACTEDXXXXXXXXXXXX   0       3.0   628.7T        0.0000  1.0   128     32          off        < rbd, to be removed
REDACTEDXXXXXXXXXXXX   3316G   3.0   628.7T        0.0155  1.0   128                 off        < rbd
REDACTEDXXXXXXXXXXXX   98.61M  3.0   628.7T        0.0000  1.0   1                   off        < device metrics
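
The listing above is the `ceph osd pool autoscale-status` output; summing the
PG_NUM column comes to 8073 PGs across all pools. The same numbers can be
cross-checked with:

  # per-pool pg_num settings
  ceph osd pool ls detail
  # cluster-wide PG total (the "pgs:" line)
  ceph -s
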
What version of Ceph are you using?
15.2.16
Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx