Hello, Baergen
Thanks for your reply. The OSD restarts are planned, but my version is 15.2.7, so I may have hit the problem you described. Could you point me to the PR that optimizes this mechanism? Besides that, if I don't want to upgrade in the near term, is lowering osd_pool_default_read_lease_ratio a good way to go - for example, to 0.4 or 0.2, to stay within the users' tolerance time?
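For reference, here is the arithmetic I am working from and the change I have in mind - only a sketch, and the exact target/scope for "ceph config set" is my assumption, so please correct me if this has to be set in ceph.conf instead:

    # worst-case PG wait ~= osd_heartbeat_grace * osd_pool_default_read_lease_ratio
    #   default:   20 * 0.8 = 16s
    #   ratio 0.4: 20 * 0.4 = 8s
    #   ratio 0.2: 20 * 0.2 = 4s
    ceph config set global osd_pool_default_read_lease_ratio 0.4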
Yite Gu
Josh Baergen <jbaergen@xxxxxxxxxxxxxxxx> wrote on Fri, Mar 10, 2023 at 22:09:
Hello,
When you say "osd restart", what sort of restart are you referring to
- planned (e.g. for upgrades or maintenance) or unplanned (OSD
hang/crash, host issue, etc.)? If it's the former, then these
parameters shouldn't matter provided that you're running a recent
enough Ceph with default settings - it's supposed to handle planned
restarts with little I/O wait time. There were some issues with this
mechanism before Octopus 15.2.17 / Pacific 16.2.8 that could cause
planned restarts to wait for the read lease timeout in some
circumstances.
Josh
On Fri, Mar 10, 2023 at 1:31 AM yite gu <yitegu0@xxxxxxxxx> wrote:
>
> Hi all,
> osd_heartbeat_grace = 20 and osd_pool_default_read_lease_ratio = 0.8 by
> default, so a PG will wait up to 16s in the worst case when an OSD restarts. This
> wait time is too long; client I/O stalls of that length are not acceptable. I think
> lowering osd_pool_default_read_lease_ratio is a good way to address this. Are there
> any good suggestions for reducing the PG wait time?
>
> Best Regard
> Yite Gu