Hi all,
we are seeing some very strange behavior with exclusive locks.
We have one image called test inside a pool called app.
This is the output of rbd info app/test:
rbd image 'test':
size 120 GB in 30720 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.651bb238e1f29
format: 2
features: layering, exclusive-lock
flags:
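(For reference, this feature set can be produced with standard rbd commands; the two lines below are only an illustration, not how this particular image was created:)
# create a new image with layering and exclusive-lock enabled (120 GB = 122880 MB)
rbd create app/test --size 122880 --image-feature layering,exclusive-lock
# or enable exclusive-lock afterwards on an existing format-2 image
rbd feature enable app/test exclusive-lock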
When we map the device on one server:
rbd map app/test
mount /dev/rbd0 /mnt/
an exclusive lock is acquired (this is the output of rbd lock list app/test):
There is 1 exclusive lock on this image.
Locker          ID       Address
client.554123   auto 1   10.0.128.19:0/3199254828
If we try to lock the image, we receive:
rbd lock add app/test lock_test_20171219_1619
rbd: lock is already held by someone else
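(For reference, this is how we inspect the lock holder; the removal command below is only a sketch, with the locker/ID values taken from the output above, and we have not tried it while the image is mapped:)
# list current lockers on the image
rbd lock list app/test
# forcibly remove the lock shown above (lock id "auto 1", held by client.554123)
rbd lock remove app/test "auto 1" client.554123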
But if we map the same image on a second server:
rbd map app/test
mount /dev/rbd0 /mnt/
the lock is overwritten:
There is 1 exclusive lock on this image.
Locker          ID       Address
client.534166   auto 1   10.0.128.18:0/3554787139
This is the output of rbd status app/test:
Watchers:
watcher=10.0.128.18:0/3554787139 client.534166 cookie=1
watcher=10.0.128.19:0/3199254828 client.554123 cookie=1
Both servers have full read/write access to the same image!
Are we doing something wrong?
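(One thing we are considering, assuming a kernel new enough to support the exclusive map option, which ours below most likely is not, is mapping with:)
rbd map app/test -o exclusive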
We are using Ceph Jewel 10.2.10 (ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)).
OS: CentOS Linux release 7.3.1611 (Core)
Kernel: Linux fred 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Thanks,
Lorenzo