Re: CLVM/GFS2 distributed locking

On 12/30/2011 03:08 PM, Stevo Slavić wrote:
> Hi Digimer and Yvette,
> 
> Thanks for the tips! I don't doubt the reliability of the technology, I
> just want to make sure it is configured well.
> 
> After fencing a node that held a lock on a file on shared storage, the
> lock remains, and the non-fenced node cannot take over the lock on that
> file. I'm wondering how one can check which process (and from which
> node, if possible) is holding a lock on a file on shared storage.
> DLM should have taken care of releasing the lock once the node was
> fenced, right?
> 
> Regards,
> Stevo.

After a successful fence call, DLM will clean up any locks held by the
lost node. That's why it's so critical that the fence action actually
succeeds (i.e.: test, test, test). If a node doesn't really die when
fenced, but the cluster thinks it did, and the lost node somehow
returns, it will think its locks are still valid and modify shared
storage, leading to near-certain data corruption.

It's all perfectly safe, provided you've tested your fencing properly. :)
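
As for finding out who holds a lock on a given file: for GFS2, the
glock state is exported through debugfs, so on each node you can read
/sys/kernel/debug/gfs2/<cluster>:<fsname>/glocks (mount debugfs with
'mount -t debugfs none /sys/kernel/debug' if it isn't already) and
look for glocks that have holder ("H:") lines; those carry the pid and
command name of the local holding process. On the DLM side, 'dlm_tool
ls' should list the lockspaces and 'dlm_tool lockdump <lockspace>'
should dump the DLM locks for one of them. A rough, untested sketch
that pulls the held glocks out of debugfs (paths and line format
assumed from the stock GFS2 glock dump):

#!/usr/bin/env python
# Sketch: walk the GFS2 glock dumps under debugfs and print any glock
# that currently has a holder, with the pid and command name of the
# holding process. Assumes debugfs is mounted at /sys/kernel/debug.
import glob
import re

# Holder ("H:") lines carry "p:<pid> [<command>]" for local holders.
HOLDER_RE = re.compile(r"p:(\d+) \[([^\]]*)\]")

def dump_held_glocks():
    # One glocks file per mounted GFS2 filesystem, named <cluster>:<fsname>.
    for path in glob.glob("/sys/kernel/debug/gfs2/*/glocks"):
        fsname = path.split("/")[-2]
        current_glock = None
        with open(path) as dump:
            for line in dump:
                if line.startswith("G:"):
                    current_glock = line.strip()
                elif line.lstrip().startswith("H:"):
                    match = HOLDER_RE.search(line)
                    if match:
                        pid, comm = match.groups()
                        print("%s  %s" % (fsname, current_glock))
                        print("    holder: pid %s (%s)" % (pid, comm))

if __name__ == "__main__":
    dump_held_glocks()

Run it on each node; the holders it reports are processes local to the
node where the dump was read, which also tells you which node is
sitting on the lock.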

Yvette,

  You might be right on the 'noatime' implying 'nodiratime'... I add
both out of habit.

-- 
Digimer
E-Mail:              digimer@xxxxxxxxxxx
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"omg my singularity battery is dead again.
stupid hawking radiation." - epitron

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


