Re: exclusive-lock

Hello,

In this context my first questions would be: how does one wind up with such
lock contention in the first place, and how does one resolve it safely?

Neither of those is really a Ceph problem; they are problems of the client
stack being used, or of knowledgeable, 24/7 monitoring and management.

Net-split and split-brain scenarios need to be resolved either by:

1. A human being making the correct decision and thus avoiding two clients
accessing the same image (neither OpenStack, Ganeti nor OpenNebula offers a
safe, out-of-the-box automatic split-brain resolver).

or

2. A system like Pacemaker that has all the tools and means both to
identify a split-brain scenario correctly and to do the right thing by
itself.
Which incidentally could also include the blacklist approach Jason
describes below as a mild form of STONITH, as sketched after this list. ^o^
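
To illustrate, here is a rough sketch of that fencing step, assuming you
have already identified the presumed-dead client's address from the lock or
watcher information; the pool/image name and the address below are only
placeholders:

  # See who currently owns/watches the image (output varies by version)
  rbd status rbd/vm-disk-1
  rbd lock list rbd/vm-disk-1

  # Fence the presumed-dead client before starting the second QEMU
  ceph osd blacklist add 192.168.0.10:0/123456789

  # Verify, and once the situation is sorted out, lift the blacklist again
  ceph osd blacklist ls
  ceph osd blacklist rm 192.168.0.10:0/123456789

In a Pacemaker setup the "blacklist add" would be what the fencing agent
executes, instead of (or in addition to) powering off the node.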


Regards,

Christian

On Mon, 11 Jul 2016 17:30:16 -0400 Jason Dillaman wrote:

> Unfortunately that is correct -- the exclusive lock automatically
> transitions upon request in order to handle QEMU live migration. There
> is some on-going work to deeply integrate locking support into QEMU
> which would solve this live migration case and librbd could internally
> disable automatic lock transitions. In the meantime, before starting
> your second copy of QEMU, you should issue a "ceph osd blacklist"
> command against the current lock owner.  That will ensure you won't
> have two QEMU processes fighting for the exclusive lock.
> 
> On Sat, Jul 9, 2016 at 12:37 PM, Bob Tucker <bob@xxxxxxxxxxxxx> wrote:
> > Hello all,
> >
> > I have been attempting to use the exclusive-lock rbd volume feature to
> > protect against having two QEMUs writing to a volume at the same time,
> > specifically when one VM appears to fail due to a net-split and a second
> > copy is started somewhere else.
> >
> > Looking at various mailing list posts and some code patches, it looks like
> > this is not currently possible, because if a client doesn't hold the lock it
> > will request it from the lock holder, and the lock holder will always give
> > it up. The lock therefore flips back and forth between the clients, which
> > in the case of a regular filesystem (such as XFS) will lead to corruption.
> >
> > Could someone confirm this is the behavior and whether it is possible to
> > protect the volume in this scenario?
> >
> >


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


