Re: How to change the pg numbers

Stefan,

I agree with you about the CRUSH rule, but I really did run into this
problem on the cluster.

I had set these values high for a quick recovery:

osd_recovery_max_active 16

osd_max_backfills 32

Are these very bad settings?
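
(For reference, a sketch of how the values currently in effect can be
checked; this assumes the centralized config store available from
Nautilus onwards:)

ceph config get osd osd_recovery_max_active
ceph config get osd osd_max_backfills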


Kern

On 18/8/2020 5:27 PM, Stefan Kooman wrote:
On 2020-08-18 11:13, Hans van den Bogert wrote:
> I don't think it will lead to more client slow requests if you set it
> to 4096 in one step, since there is a cap on how many recovery/backfill
> requests there can be per OSD at any given time.
>
> I am not sure though, but I am happy to be proved wrong by the senior
> members of this list :)
Not sure if I qualify for senior, but here are my 2 cents ...

I would argue that you do want to do this in one step. Doing this in
multiple steps will trigger data movement every time you change pg_num
(and pgp_num for that matter). Ceph calculates a new mapping every time
you change the pg(p)_num for a pool (or alter CRUSH rules).
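
As a concrete example, the one-step change boils down to something like
this (a sketch; the pool name is a placeholder, and 4096 is the target
mentioned earlier in this thread):

ceph osd pool set <pool> pg_num 4096
ceph osd pool set <pool> pgp_num 4096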

I would keep the recovery throttles conservative, e.g.:

osd_recovery_max_active = 1
osd_max_backfills = 1
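
On a recent release these can be applied cluster-wide via the config
store, roughly like this (again just a sketch):

ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_max_backfills 1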

If your cluster can't handle this, then I wonder what a disk / host
failure would trigger.

Some on this list would argue that you also want the following setting
to avoid client IO starvation:

ceph config set osd osd_op_queue_cut_off high

This is already the default in Octopus.

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx