Re: tcmu-runner: "Acquired exclusive lock" every 21s

On 08/05/2019 05:58 AM, Matthias Leopold wrote:
> Hi,
> 
> I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12
> before I dare to put it into production. I installed latest tcmu-runner
> release (1.5.1) and (like before) I'm seeing that both nodes switch
> exclusive locks for the disk images every 21 seconds. tcmu-runner logs
> look like this:
> 
> 2019-08-05 12:53:04.184 13742 [WARN] tcmu_notify_lock_lost:222
> rbd/iscsi.test03: Async lock drop. Old state 1
> 2019-08-05 12:53:04.714 13742 [WARN] tcmu_rbd_lock:762 rbd/iscsi.test03:
> Acquired exclusive lock.
> 2019-08-05 12:53:25.186 13742 [WARN] tcmu_notify_lock_lost:222
> rbd/iscsi.test03: Async lock drop. Old state 1
> 2019-08-05 12:53:25.773 13742 [WARN] tcmu_rbd_lock:762 rbd/iscsi.test03:
> Acquired exclusive lock.
> 
> Old state can sometimes be 0 or 2.
> Is this expected behaviour?

What initiator OS are you using?

It can happen when 2 or more initiators access the same image, but it
should not happen in a healthy setup. It occurs when one initiator cannot
reach the image's primary (owner) gateway and falls back to the secondary,
while the other initiators keep accessing the image through the primary.
The lock then bounces between the gateways as the initiators drive I/O
through both of them.
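
You can confirm this from the ceph side by watching which client holds
the exclusive lock while the initiators are doing I/O. A rough sketch
(run it on any node with a suitable ceph keyring; the image name is the
one from your logs):

    # repeat a few times, ~30s apart
    rbd lock ls rbd/iscsi.test03
    rbd status rbd/iscsi.test03   # also lists the current watchers

If the lock owner alternates between the two gateways' client addresses
roughly every 20 seconds, you are seeing exactly this ping-pong.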

You could also hit it if you somehow mapped the image to multiple LUNs,
so the initiator thinks LUN0 and LUN10 are different images with
different primary gws.
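
To rule that out, dump the gateway config and check that each RBD image
is exported to a given client only once (gwcli output layout differs a
bit between ceph-iscsi versions, so this is just the general idea):

    gwcli ls

Look under the target's hosts/ section and make sure no disk shows up
twice for the same initiator.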

> 
> What may be of interest in my case is that I use a dedicated
> cluster_client_name in iscsi-gateway.cfg (not client.admin) and that I'm
> running 2 separate targets in different IP networks.
>

The network setup on one of the initiator nodes might not be correct, so
that initiator has dropped down to the secondary gw.

On the initiator OS, check that all initiators are accessing the image
over the primary (active/optimized) path.
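
On a Linux initiator with dm-multipath, for example, you could check it
with something like (assuming ALUA-based path priorities, as described
above; other initiator OSes have their own MPIO tools for the same check):

    multipath -ll
    iscsiadm -m session -P 3

For every mapped image, the path group marked as active should be the
higher-priority (active/optimized) one, and it should point at the same
gateway on all initiator nodes.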
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


