RBD exclusive lock

Hello all!

I am facing the following issue with Ceph RBD: I can map the same image on multiple hosts.

After I map the image on the first host, I can see its lock on the image. I was expecting the map to fail on the second node, but it didn't: the second node was able to map the image and take over the exclusive lock.

How is this possible? What am I missing?
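
For what it's worth, this is how I would double-check that the exclusive-lock feature is actually enabled on the image (I assume it must be, given the "auto" lock shown further down):

# show image metadata; the features line should list exclusive-lock
rbd info testimg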

Here are the commands and their results:

First node:

- Initially, the image is not mapped and holds no locks:

root@compute1:~# rbd status testimg
Watchers: none
root@compute1:~# rbd lock ls testimg
root@compute1:~#

- Map the image on the first node and check the status and locks again:

root@compute1:~# rbd map testimg
/dev/rbd0
root@compute1:~# rbd status testimg
Watchers:
    watcher=10.10.10.52:0/3566317141 client.27430 cookie=18446462598732840961
root@compute1:~# rbd lock ls testimg
There is 1 exclusive lock on this image.
Locker        ID                         Address
client.27430  auto 18446462598732840961  10.10.10.52:0/3566317141
root@compute1:~#
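
In case the table above gets mangled by the mail client, the same locker information can also be dumped with the standard --format option:

# machine-readable view of the current lockers
rbd lock ls testimg --format json --pretty-format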

Next, I try to map the image on the second node:

- Map the image on the second node and check the status and locks again:

root@controller1:~# rbd map testimg
/dev/rbd0
root@controller1:~# rbd status testimg
Watchers:
    watcher=10.10.10.52:0/3566317141 client.27430 cookie=18446462598732840961
    watcher=10.10.10.51:0/2813741573 client.27469 cookie=18446462598732840961
root@controller1:~# rbd lock ls testimg
There is 1 exclusive lock on this image.
Locker        ID                         Address
client.27469  auto 18446462598732840961  10.10.10.51:0/2813741573
root@controller1:~#
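
If I read the rbd manpage correctly, "rbd map" also has an --exclusive option that is meant to keep the lock from being handed over to other clients. Is that what I should be using here? Roughly:

# map with the exclusive map option so the lock is acquired at map time
# and, as I understand it, not transitioned to other clients
rbd map --exclusive testimg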

root@compute1:~# ceph versions

{
    "mon": {
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 4
    },
    "mds": {
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 2
    },
    "overall": {
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 11
    }
}
root@compute1:~# rbd -v
ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
root@compute1:~# uname -r
5.4.0-90-generic

What am I doing wrong?
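
P.S. If the answer is that the exclusive-lock feature only serializes writers and is not meant to fence off other clients, would advisory locks be the intended tool instead? Something like the following (the lock id "my-host-lock" is just an example):

# take an explicit advisory lock before mapping; a second host trying
# to add a lock on the same image should then get an error
rbd lock add testimg my-host-lock

# release it later (the locker id comes from "rbd lock ls")
# rbd lock rm testimg my-host-lock client.27430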

Thank you,
Laszlo