Re: What is the order of processing a lock request?


 



--- Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:

> Ja S wrote:
> > --- Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:
> > 
> >> Ja S wrote:
> >>> --- Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:
> >>>> Ja S wrote:
> >>>>> Hi, All:
> >>>>>
> >>>>>
> >>>>> When an application on a cluster node A needs to
> >>>>> access a file on a SAN storage, how does DLM
> >>>>> process the lock request?
> >>>>>
> >>>>> Should DLM first determine whether there already
> >>>>> exists a lock resource mapped to the file, by
> >>>>> doing the following things in this order: 1)
> >>>>> looking at the master lock resources on the node
> >>>>> A, 2) searching the local copies of lock
> >>>>> resources on the node A, 3) searching the lock
> >>>>> directory on the node A to find out whether a
> >>>>> master lock resource associated with the file
> >>>>> exists on another node, 4) sending messages to
> >>>>> other nodes in the cluster for the location of
> >>>>> the master lock resource?
> >>>>>
> >>>>> I ask this question because, from some online
> >>>>> articles, it seems that DLM will always search
> >>>>> the cluster-wide lock directory across the whole
> >>>>> cluster first to find the location of the master
> >>>>> lock resource.
> >>>>>
> >>>>> Can anyone kindly confirm the order of the steps
> >>>>> that DLM performs?
> >>>>>
> >>>> This should be very well documented, as it's
> >>>> common amongst DLM implementations.
> >>>>
> >>> I think I may be blind. I have not yet found a
> >>> document which describes the sequence of processes
> >>> in a precise way. I tried to read the source code
> >>> but I gave up due to the lack of comments.
> >>>
> >>>
> >>>> If a node needs to lock a resource that it
> >>>> doesn't know about, then it hashes the name to
> >>>> get a directory node ID, then asks that node for
> >>>> the master node. If there is no master node (the
> >>>> resource is not active), then the requesting node
> >>>> is made master.
> >>>>
> >>>> If the node does know the master (other locks on
> >>>> the resource exist), then it will go straight to
> >>>> that master node.
> >>>
> >>> Thanks for the description. 
> >>>
> >>> However, one point is still not clear to me: how
> >>> can a node conclude whether it __knows__ the lock
> >>> resource or not?
> >> A node knows the resource if it has a local copy.
> >> It's as simple as that.
> >>
> > 
> > If the node is a human and has a brain, it can
> > "immediately" recall that it knows the lock
> > resource. However, a computer program does not
> > "know" anything until it searches for the target in
> > what it has on hand.
> > 
> > Therefore, the point here is the __search__. What
> > should the node search, in which order, and how does
> > it search?
> > 
> > If I missed anything, please kindly point it out so
> > that I can clarify my question as clearly as
> > possible.
> >
> 
> I think you're trying to make this more complicated
> than it is.



Maybe :-). I just want to know what exactly happens.



> As I've said several times now, a node "knows" a
> resource if there is a local lock on it. That's it!
> It's not more or less difficult than that, really it
> isn't!

At the same time, there could be 30K local locks on a
node in our system. How are these local locks stored or
mapped: in a hash table, or a big but sparse array?
From the source code, I guess the local locks are
stored in a list. Correct me if I am wrong, since I
really have not yet studied the code very carefully.
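To make my question concrete, below is a minimal sketch
of the kind of per-node table I am imagining. This is
purely my own illustration -- all the names (rsb_table,
rsb_lookup, the bucket count) are made up by me, not
taken from the DLM code:

    #include <stdint.h>
    #include <string.h>

    #define RSB_HASH_BUCKETS 1024   /* bucket count: my assumption */

    struct rsb {                    /* one lock resource copy */
        char        name[64];       /* resource name */
        int         name_len;
        struct rsb *next;           /* chain within one bucket */
    };

    static struct rsb *rsb_table[RSB_HASH_BUCKETS];

    static uint32_t rsb_hash(const char *name, int len)
    {
        uint32_t h = 5381;          /* djb2-style hash, illustrative */
        while (len--)
            h = h * 33 + (unsigned char)*name++;
        return h % RSB_HASH_BUCKETS;
    }

    /* "Does this node know the resource?" == is it in the table? */
    static struct rsb *rsb_lookup(const char *name, int len)
    {
        struct rsb *r;

        for (r = rsb_table[rsb_hash(name, len)]; r; r = r->next)
            if (r->name_len == len && !memcmp(r->name, name, len))
                return r;           /* known: a local copy exists */
        return NULL;                /* unknown: ask the directory node */
    }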


> If the node doesn't have a local lock on the resource
> then it doesn't "know" it and has to ask the directory
> node where it is mastered.

Does it mean that even if the node owns the master lock
resource, but does not have a local lock associated with
that master lock resource, it still needs to ask the
directory node?



> (As I'm sure you already know, locks are known by
> their lock ID numbers, so there's no "search" involved
> there either).

True. When a request on a file is issued, the inode
number of the file (in hex) is used to make up the name
of the lock resource (the second number in the name).
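For example, my understanding is that the name is built
roughly like the sketch below. Please treat the exact
format string and the type constant as my assumptions,
not as the real lock_dlm code:

    #include <stdio.h>

    #define LOCK_TYPE_INODE 2   /* assumed lock type number for inodes */

    /* first field = lock type, second field = inode number in hex */
    static void make_resource_name(char *buf, size_t len,
                                   unsigned int lock_type,
                                   unsigned long long inode_no)
    {
        snprintf(buf, len, "%8x%16llx", lock_type, inode_no);
    }

    /* make_resource_name(buf, sizeof(buf), LOCK_TYPE_INODE, 0x5f21ab)
     * would give "       2          5f21ab" */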

It is true that the node has the list of lock resources
(either local copies or master copies) as long as it
has local locks. However, the node is just like a
teacher who has a list of students, where the students
are known by their names or student IDs. When the
teacher wants to fill in the final grade for each
student, he still needs to look at the form, search for
the student's name, and put the grade beside the name.
The search can be done by student ID if the form is
sorted by student ID, or by surname if the form is
sorted by surname. Either way, the teacher still needs
to __search__. The same should apply to the node. The
node may use a smart way to search the lock resources
kept in the list, possibly a hash function (but I doubt
there is a hash function so good that it can find the
location of the target lock resource immediately).

Am I still wrong?
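Concretely, in terms of my hypothetical sketch above,
the __search__ I have in mind is one hash plus a short
walk down a bucket chain:

    /* Continuing the made-up sketch from earlier in this mail. */
    static int resource_is_known(const char *name, int len)
    {
        /* one hash, then compares within a single bucket */
        return rsb_lookup(name, len) != NULL;
        /* 0 here would mean: hash the name to a node ID and ask
         * that directory node who the master is */
    }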

> 
> There is no "search" for a lock around the cluster;
> that's what the directory node provides. And as I have
> already said, that is located by hashing the resource
> name to yield a node ID.

Yes, yes, I think I didn't say it clearly. The lock
resource is located by hashing the resource name to
yield a node ID. But before hashing, the node still
needs to perform a search within the list, or whatever
data structure keeps the local locks on the node, to
find out whether the target lock resource is already in
use or "known". Isn't that right? I am sorry that I
seem so stubborn.
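So, if I read you correctly, the first-encounter flow
would be something like the following. Again, this is
my own hypothetical code (directory_nodeid and the hash
are invented by me), not the real implementation:

    #include <stdint.h>

    /* Every node runs the same hash over the same member list,
     * so they all compute the same directory node -- no search
     * around the cluster. */
    static int directory_nodeid(const char *name, int len,
                                const int *member_ids, int num_members)
    {
        uint32_t h = 5381;
        while (len--)
            h = h * 33 + (unsigned char)*name++;
        return member_ids[h % num_members];
    }

    /* Putting it together with rsb_lookup() from my sketch:
     * 1. look the name up in the local table (the only "search");
     * 2. on a miss, pick the directory node with directory_nodeid()
     *    and send it a lookup request for the master. */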

Thanks for your patience. You are really a good helper.

Jas

> So, if you like, the "search" you seem to be looking
> for is simply a hash of the resource name. But it's
> not really a search, and it's only invoked when the
> node first encounters a resource.
> 
> Chrissie
> 



      

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
