Hi Eugen,
thanks, sure, the output is below:
pg_num is stuck at 1152 and pgp_num at 1024.
Regards,
Martin
ceph config set global mon_max_pg_per_osd 400
ceph osd pool create cfs_data 2048 2048 --pg_num_min 2048
pool 'cfs_data' created
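The applied values can also be queried directly with the standard pool
gets:

ceph osd pool get cfs_data pg_num
ceph osd pool get cfs_data pgp_num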
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 187 pgp_num 59 autoscale_mode off last_change 3099 lfor 0/3089/3096 flags hashpspool,bulk stripe_width 0 target_size_ratio 1 application cephfs
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode off last_change 2942 lfor 0/0/123 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 2943 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 9 'cfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1152 pgp_num 1024 pg_num_target 2048 pgp_num_target 2048 autoscale_mode off last_change 3198 lfor 0/0/3198 flags hashpspool stripe_width 0 pg_num_min 2048
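If I read this output correctly, 2048 is only recorded as
pg_num_target/pgp_num_target, and the mons raise the actual
pg_num/pgp_num in steps while data rebalances; as far as I understand,
the mgr option target_max_misplaced_ratio (default 0.05) throttles how
fast pgp_num may advance. A sketch of what I would try in order to
watch and, if needed, speed this up (the 0.10 value is just an
example):

ceph osd pool ls detail | grep cfs_data   # pg_num/pgp_num should creep toward the targets
ceph config set mgr target_max_misplaced_ratio 0.10   # allow a larger misplaced fraction per step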
On 14.12.22 15:10, Eugen Block wrote:
Hi,
are there existing pools in the cluster already? Can you share your
'ceph osd df tree' as well as 'ceph osd pool ls detail'? It sounds like
Ceph is trying to stay within the limit of mon_max_pg_per_osd (default
250).
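As a rough sanity check: 2048 PGs at size 3 over 71 OSDs would be about
2048 * 3 / 71 ≈ 87 PG replicas per OSD from that pool alone, so the
default limit of 250 would only bite in combination with the other
pools. The effective limit and the per-OSD PG counts can be checked
with:

ceph config get mon mon_max_pg_per_osd
ceph osd df tree   # the PGS column shows placement groups per OSD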
Regards,
Eugen
Quoting Martin Buss <mbuss7004@xxxxxxxxx>:
Hi,
on Quincy, I created a new pool:
ceph osd pool create cfs_data 2048 2048
6 hosts, 71 OSDs
The autoscaler is off; I find it strange that the pool was created with
pg_num 1152 and pgp_num 1024, with 2048 only recorded as the new
target. I cannot manage to actually bring this pool to pg_num 2048 and
pgp_num 2048.
What config option am I missing that prevents me from growing the pool
to 2048? Although I specified the same value for pg_num and pgp_num,
they differ.
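Is it supposed to help to re-apply the values explicitly, e.g. with the
standard pool-set commands (though, as far as I know, recent releases
may still apply such a change only gradually)?

ceph osd pool set cfs_data pg_num 2048
ceph osd pool set cfs_data pgp_num 2048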
I would appreciate any help and guidance.
Thank you,
Martin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx