A few years ago Dan van der Ster and I were working on two similar
scripts for gradually increasing the number of PGs.
Just have a look at the following link:
https://github.com/cernceph/ceph-scripts/blob/master/tools/split/ceph-gentle-split
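The idea behind that script is simply to raise pgp_num in small
increments and wait for the resulting data movement to settle before
taking the next step. A rough sketch of that loop (not the actual
script; the pool name, the increments and the health check here are
simplified placeholders):

  POOL=mypool                         # placeholder pool name
  for PGS in 1024 2048 3072 4096; do  # example increments towards the target
      ceph osd pool set $POOL pgp_num $PGS
      # wait until the objects misplaced by this step have been moved
      while ceph health | grep -q misplaced; do
          sleep 60
      done
  done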
___________________________________
Clyso GmbH
On 18.08.2020 at 11:27, Stefan Kooman wrote:
On 2020-08-18 11:13, Hans van den Bogert wrote:
I don't think it will lead to more client slow requests if you set it
to 4096 in one step, since there is a cap on how many recovery/backfill
requests there can be per OSD at any given time.
I am not sure though, but I am happy to be proven wrong by the more
senior members on this list :)
Not sure if I qualify for senior, but here are my 2 cents ...
I would argue that you do want to do this in one step. Doing it in
multiple steps will trigger data movement every time you change pg_num
(and pgp_num for that matter): Ceph recalculates the mapping every time
you change the pg(p)_num of a pool (or when you alter CRUSH rules).
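In concrete terms that means a single change to the final value, e.g.
(pool name and target are placeholders):

  ceph osd pool set <poolname> pg_num 4096
  ceph osd pool set <poolname> pgp_num 4096

rather than walking pg_num up in several smaller increments.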
To keep the impact of that data movement on client IO limited, use
conservative recovery throttles:
osd_recovery_max_active = 1
osd_max_backfills = 1
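For example, both can be applied cluster-wide at runtime:

  ceph config set osd osd_recovery_max_active 1
  ceph config set osd osd_max_backfills 1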
If your cluster can't handle this, then I wonder what a disk / host
failure would trigger.
Some on this list would argue that you also want the following setting
to avoid client IO starvation:
ceph config set osd osd_op_queue_cut_off high
This is already the default in Octopus.
Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx