Re: RBD Exclusive locks overwritten


2017-12-19 16:56 GMT+01:00 Wido den Hollander <wido@xxxxxxxx>:


On 12/19/2017 04:33 PM, Garuti, Lorenzo wrote:
Hi all,

we are having a very strange behavior with exclusive locks.
We have one image called test inside a pool called app.


The exclusive-lock feature only guarantees that a single client writes at a time: clients exchange the lock between themselves as needed, but both clients can still map and mount the image.

Wido

OK, but if I write a test file (with the same name, "dd.out") using dd on both machines, I can write to the same file at the same moment. Which one is then the correct/latest file?
My question now is: is there a mechanism that prevents mounting the same image on different hosts?

At the moment the only solution I have found is to write a wrapper that checks whether there are any watchers on the image.
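For reference, a minimal sketch of such a wrapper (my own hypothetical example, not an official tool; it assumes `rbd status <pool>/<image> --format json` is available, as in Jewel, and that the JSON contains a "watchers" list like the output quoted below):

```python
#!/usr/bin/env python
# Hypothetical wrapper: refuse to map an RBD image that already has watchers.
# Assumes "rbd status <pool>/<image> --format json" returns JSON such as:
#   {"watchers": [{"address": "10.0.128.19:0/3199254828",
#                  "client": 554123, "cookie": 1}]}
import json
import subprocess
import sys

def has_watchers(status_json):
    """Return True if the 'rbd status' JSON reports any watchers."""
    status = json.loads(status_json)
    return len(status.get("watchers", [])) > 0

def safe_map(image):
    # Query current watchers, then map only if nobody is watching the image.
    out = subprocess.check_output(["rbd", "status", image, "--format", "json"])
    if has_watchers(out):
        sys.exit("refusing to map %s: image already has watchers (mapped elsewhere?)" % image)
    subprocess.check_call(["rbd", "map", image])

if __name__ == "__main__" and len(sys.argv) > 1:
    safe_map(sys.argv[1])
```

Note that this check is racy (two hosts could pass it simultaneously before either maps), so an advisory lock via `rbd lock add`, taken before mapping, would be a more robust fence.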

Thanks,
Lorenzo

This is the output of rbd info app/test:

    rbd image 'test':
    size 120 GB in 30720 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.651bb238e1f29
    format: 2
    features: layering, exclusive-lock
    flags:


When we map the device on one server:

    rbd map app/test
    mount /dev/rbd0 /mnt/


an exclusive lock is acquired:

    There is 1 exclusive lock on this image.
    Locker         ID      Address
    client.554123  auto 1  10.0.128.19:0/3199254828


If we try to lock the image, we receive:

    rbd lock add app/test lock_test_20171219_1619

    rbd: lock is already held by someone else


But if we try to map the same image on a second server:

    rbd map app/test
    mount /dev/rbd0 /mnt/


the lock is overwritten:

    There is 1 exclusive lock on this image.
    Locker         ID      Address
    client.534166  auto 1  10.0.128.18:0/3554787139



This is the output of rbd status app/test:

    Watchers:
    watcher=10.0.128.18:0/3554787139 client.534166 cookie=1
    watcher=10.0.128.19:0/3199254828 client.554123 cookie=1


Both servers have full read/write access to the same image!
Are we doing something wrong?

We are using Ceph Jewel: ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe).

OS: CentOS Linux release 7.3.1611 (Core), kernel 3.10.0-693.11.1.el7.x86_64 (x86_64).

Thanks,
Lorenzo
--
Lorenzo Garuti
CED MaxMara
email: garuti.l@xxxxxxxxxx <mailto:garuti.l@xxxxxxxxxx>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Lorenzo Garuti
CED MaxMara
email: garuti.l@xxxxxxxxxx
tel: 0522 3993772 - 335 8416054
