Re: [PATCH] ceph: make osd_request_timeout changeable online in debugfs

On Thu, May 24, 2018 at 5:27 AM, Dongsheng Yang
<dongsheng.yang@xxxxxxxxxxxx> wrote:
> The default value of osd_request_timeout is 0, which means requests never
> time out. We can set this value when mapping with -o "osd_request_timeout=XX",
> but we can't change it online.

Hi Dongsheng,

Changing just osd_request_timeout won't do anything about outstanding
requests waiting for the acquisition of the exclusive lock.  This is an
rbd problem and should be dealt with in rbd.c.
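
(For reference, this knob is currently settable only at map time; a
hypothetical invocation, with made-up pool/image names:

    $ rbd map -o osd_request_timeout=30 mypool/myimage

Once the device is mapped, there is no supported way to change it.)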

>
> [Question 1]: Why do we need to set osd_request_timeout?
> When we reboot a node that has krbd devices mapped, even with
> rbdmap.service enabled, shutdown will block if the ceph cluster is not
> working.
>
> In particular, if we have three controller nodes running as ceph mons,
> and at the same time there are k8s pods with krbd devices on those nodes,
> then we can't shut down the last controller node when we want to shut
> down all nodes, because by that point the ceph cluster is no longer
> reachable.

Why can't rbd images be unmapped in a proper way before the cluster is
shut down?
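
Something along these lines in a shutdown unit, ordered before the
network goes down, should be enough (device and mountpoint are
illustrative):

    # flush and detach while the mons are still reachable
    $ umount /mnt/myfs
    $ rbd unmap /dev/rbd0

This is essentially what rbdmap.service is supposed to do on the way
down.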

>
> [Question 2]: Why don't we use rbd map -o "osd_request_timeout=XX"?
> We don't want osd_request_timeout set for the device's whole lifecycle:
> a networking problem or a cluster recovery could make requests time out,
> which would make the fs read-only and take the application down.
>
> [Question 3]: How does this patch solve these problems?
> With this patch, we can map an rbd device with the default value of
> osd_request_timeout, meaning never time out, which solves the problem
> mentioned in Question 2.
>
> At the same time, we can set osd_request_timeout to whatever we need
> during system shutdown, for example from rbdmap.service. Then we can
> make sure the host shuts down or reboots normally whether or not the
> ceph cluster is working. This solves the problem mentioned in
> Question 1.
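
(For the record, with the patch as proposed I assume usage would look
something like

    # file name inferred from the patch subject; the exact path is an
    # assumption, based on the per-client debugfs dir libceph creates
    $ echo 60 > /sys/kernel/debug/ceph/<fsid>.client<id>/osd_request_timeout

but see below for why I don't think this is the right approach.)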

The plan is to add a new unmap option, so one can do "rbd unmap -o
full-force", as noted in https://tracker.ceph.com/issues/20927.  This
is an old problem but it's been blocked on various issues in libceph.
Most of them look like they will be resolved in 4.18, bringing better
support for "umount -f" in kcephfs.  We should be able to reuse that
work for "rbd unmap -o full-force".
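
Once that lands, shutdown tooling could do something like

    $ rbd unmap -o full-force /dev/rbd0

(device name illustrative), the idea being to tear the mapping down
even when the cluster is unreachable, instead of fiddling with
timeouts.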

Thanks,

                Ilya


