Re: Understanding filesystem size


I was too impatient: after a few minutes the autoscaler kicked in, and the situation is now as follows:

# ceph osd pool autoscale-status
POOL             SIZE    TARGET SIZE  RATE                RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             246.1M               3.0                 323.8T        0.0000                                 1.0        1              on         False
wizard_metadata  1176M                3.0                 323.8T        0.0000                                 4.0       16       16384   on         True
wizard_data      80443G               1.3333333730697632  323.8T        0.3235                                 1.0     2048              on         True
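
If I read the columns right, RATIO is simply SIZE x RATE / RAW CAPACITY: for the data pool, 80443G x 1.3333 is about 104.7T, and 104.7T / 323.8T is about 0.3235, which matches the value reported above.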

So it seems that the data pool is increasing its number of PGs from 512 to 2048 (currently there are 711 PGs in total across the three pools).
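
In the meantime I'm keeping an eye on the split and on the backfill with (pool name as in the output above):

# ceph osd pool get wizard_data pg_num
# ceph osd pool get wizard_data pgp_num
# ceph -s

pg_num and pgp_num should gradually converge to the new value, while ceph -s shows the backfill and misplaced objects going down.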

I'll report back after the backfill operations finish.

Nicola


