Re: Lock Resources

--- Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:


> > DLM lockspace 'data'
> >        5         2f06768 1
> >        5          114d15 1
> >        5          120b13 1
> >        5         5bd1f04 1
> >        3          6a02f8 2
> >        5          cb7604 1
> >        5          ca187b 1
> > 
> 
> The first two numbers are the lock name. Don't ask
> me what they mean,
> that's a GFS question! (actually, I think inode
> numbers might be
> involved) The last number is the nodeID on which the
> lock is mastered.
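
Based on that description, a dump like the one quoted above can be summarized with a short script. This is a sketch under the stated assumptions: each data line is `<len> <hex lock name> <master nodeID>`, the name fields are GFS-internal, and the last column is the mastering node.

```python
# Summarize a DLM lockspace dump (assumed format: header line,
# then "<len> <hex lock name> <master nodeID>" per resource).
from collections import Counter

def masters_per_node(dump_lines):
    """Count how many resources are mastered on each node ID."""
    counts = Counter()
    for line in dump_lines:
        fields = line.split()
        # Skip the "DLM lockspace 'data'" header and blank lines;
        # a data line has three fields and ends in a numeric node ID.
        if len(fields) == 3 and fields[2].isdigit():
            counts[int(fields[2])] += 1
    return dict(counts)

sample = """\
DLM lockspace 'data'
       5         2f06768 1
       5          114d15 1
       3          6a02f8 2
       5          cb7604 1
"""
print(masters_per_node(sample.splitlines()))  # → {1: 3, 2: 1}
```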


Great, thanks again!


> >> That lookup only happens the first time a resource is
> >> used by a node; once the node knows where the master is,
> >> it does not need to look it up again, unless it releases
> >> all locks on the resource.
> >>
> > 
> > Oh, I see. Just to clarify further: does it mean that if
> > the same lock resource is required again by an application
> > on node A, node A will go straight to the known node
> > (i.e. node B) that previously held the master, but needs
> > to do a lookup again if node B has already released the
> > lock resource?
> 
> Not quite. A resource is mastered on a node for as long as
> there are locks for it. If node A gets the lock (which is
> mastered on node B), then it always knows to go to node B
> until all locks on node A are released. When that happens,
> the local copy of the resource on node A is released,
> including the reference to node B. If all the locks on
> node B are released (but A still has some), then the
> resource will stay mastered on node B, and nodes that still
> have locks on that resource will know where to find it
> without a directory lookup.
> 
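
The lifecycle described above can be sketched as a toy model. This is illustrative only, not the real DLM data structures: each node caches a reference to the master alongside its local copy of the resource, drops it when its last lock is released, and must do a fresh directory lookup on next use.

```python
class Directory:
    """Toy stand-in for the DLM resource directory (illustrative)."""
    def __init__(self, master_id):
        self.master_id = master_id  # node the resource is mastered on
        self.lookups = 0            # how many directory lookups occurred

    def lookup_master(self):
        self.lookups += 1
        return self.master_id

class NodeView:
    """One node's local view of a single resource (illustrative)."""
    def __init__(self):
        self.master = None  # remembered master node ID, if any
        self.locks = 0      # locks this node currently holds

    def acquire(self, directory):
        if self.master is None:
            # First use: directory lookup to find the master.
            self.master = directory.lookup_master()
        self.locks += 1
        return self.master

    def release(self):
        self.locks -= 1
        if self.locks == 0:
            # Last local lock gone: the local copy (and the cached
            # master reference) is dropped; the next acquire needs
            # a fresh directory lookup.
            self.master = None

d = Directory(master_id=2)    # resource mastered on node B (ID 2)
a = NodeView()                # node A's local view
a.acquire(d); a.acquire(d)    # second acquire reuses the cached master
a.release(); a.release()      # last release drops the local copy
a.acquire(d)                  # requires a fresh lookup
print(d.lookups)              # → 2
```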

Aha, I think I missed another important concept -- the local
copy of a lock resource. I did not realise that nodes keep
local copies of lock resources. Which file should I check to
see how many local copies a node has, and what they are?

Many thanks again, you have been very helpful.

Jas



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
