# ceph osd pool autoscale-status
POOL             SIZE    TARGET SIZE  RATE                RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             246.1M               3.0                 323.8T        0.0000                                 1.0   1                   on         False
wizard_metadata  1176M                3.0                 323.8T        0.0000                                 4.0   16      16384       on         True
wizard_data      80443G               1.3333333730697632  323.8T        0.3235                                 1.0   2048                on         True
So it seems that the data pool is increasing the number of PGs from 512 to 2048 (currently there are 711 PGs in total across the three pools).
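For anyone following along, one way to watch the split and the remaining backfill (using the data pool name from the output above) is something like:

  ceph osd pool get wizard_data pg_num    # current pg_num as the splits are applied
  ceph osd pool get wizard_data pgp_num   # placement mapping actually updated so far
  ceph -s                                 # overall backfill/recovery progress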
I'll report back after the backfill operations finish.

Nicola