Re: Understanding filesystem size

Here it is:

# ceph osd dump | grep pool
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 191543 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 150.00
pool 2 'wizard_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 254279 lfor 0/8092/8090 flags hashpspool,bulk stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 7.86
pool 3 'wizard_data' erasure profile k6_m2_host size 8 min_size 7 crush_rule 1 object_hash rjenkins pg_num 2048 pgp_num 2048 autoscale_mode on last_change 266071 lfor 0/0/265366 flags hashpspool,ec_overwrites,bulk stripe_width 24576 application cephfs
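
For reference, the NEW PG_NUM value discussed below is the per-pool recommendation shown by the autoscaler status, which (if I remember the command correctly) can be listed with:

# ceph osd pool autoscale-status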


According to the documentation:

NEW PG_NUM (if present) is the value that the system recommends that the pg_num of the pool should be. It is always a power of two, and it is present only if the recommended value varies from the current value by more than the default factor of 3.

So it's not the target, just a recommendation, if I understand correctly. I still find it quite strange that 16384 PGs are recommended for a pool hosting ~1 GB of data.
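
If I understand the autoscaler correctly, for a pool flagged as bulk the recommendation is based on the pool's expected share of the cluster capacity and the target PGs per OSD, not on the data currently stored, so these are the settings I would double-check (using wizard_metadata from the dump above as the example pool):

# ceph osd pool get wizard_metadata bulk
# ceph config get mon mon_target_pg_per_osd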

Nicola


