Re: PG scaling questions


 



Hi,

Each placement group will be split into 4 pieces in place, all at nearly the same time; no empty PGs will be created.

Normally, you only set pg_num and do not touch pgp_num. Instead, you can set “target_max_misplaced_ratio” (default 5%), and the mgr will increase pgp_num for you. It raises pgp_num so that some PGs get placed onto other OSDs, until the misplaced ratio reaches the target; then it waits for some backfilling to finish before increasing pgp_num again. (This behavior seems to have been introduced in Nautilus.)
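
Something like this should be all that is needed (just a rough sketch, reusing the pool name from your mail; the ratio value is simply the default written out, and you may not need to set it at all):

  # optionally tune how much misplaced data the mgr tolerates at once (0.05 = 5% is the default)
  ceph config set mgr target_max_misplaced_ratio 0.05

  # raise pg_num only; the mgr will step pgp_num up toward it over time
  ceph osd pool set default.rgw.buckets.data pg_num 32

  # watch pgp_num creep up and the misplaced percentage in the status output
  ceph osd pool get default.rgw.buckets.data pgp_num
  ceph -s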

So I don’t think you need to worry about full OSDs. The “backfillfull ratio” should throttle backfill when an OSD is nearly full, which in turn throttles the pgp_num increase.
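
If you want to double-check those thresholds before starting (again only a sketch; the 0.90 below is an example, not a recommendation):

  # show the full_ratio, backfillfull_ratio and nearfull_ratio currently in effect
  ceph osd dump | grep ratio

  # only if you really need to change it
  ceph osd set-backfillfull-ratio 0.90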

From: Gabriel Tzagkarakis<mailto:gabrieltz@xxxxxxxxx>
Sent: August 3, 2021 19:42
To: ceph-users@xxxxxxx<mailto:ceph-users@xxxxxxx>
Subject: PG scaling questions

hello everyone,

I would like to know how autoscaling or manual scaling actually works, so that
I can keep my cluster from running out of disk space.

Let's say I want to scale a pool of 8 PGs, each ~400 GB, to 32 PGs.

1) Does each placement group get split into 4 pieces IN-PLACE, all at the same
time?
2) Does autoscaling pick one of the existing placement groups (for example X.Y),
create new empty placement groups, migrate data onto them, and then continue to
the next big PG, with or without deleting the original PG?
3) Something else?

I am mostly concerned about the period when both the pre-existing PGs and the
newly created ones coexist in the cluster, because I want to avoid full OSDs.
In my case each PG holds many small files, and deleting stray PGs takes a long
time.

Would it be better if I used something like
ceph osd pool set default.rgw.buckets.data pg_num 32
and then increased pgp_num in increments of 8, assuming only one of the original
PGs is affected at a time? My assumption may be wrong again, though.
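
To be concrete, I mean roughly this (the step sizes are only an example, and my
reading of how pgp_num maps onto the original PGs may be off):

 ceph osd pool set default.rgw.buckets.data pg_num 32
 ceph osd pool set default.rgw.buckets.data pgp_num 16
 # wait for backfill to finish, then
 ceph osd pool set default.rgw.buckets.data pgp_num 24
 # wait again, then
 ceph osd pool set default.rgw.buckets.data pgp_num 32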

I could not find anything relevant in the documentation.

Thank you
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




