Re: What is the order of processing a lock request?

Ja S wrote:
> --- Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:
> 
>> Ja S wrote:
>>> --- Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:
>>>> Ja S wrote:
>>>>> Hi, All:
>>>>>
>>>>> When an application on a cluster node A needs to access a file on
>>>>> a SAN storage, how does DLM process the lock request?
>>>>>
>>>>> Should DLM first determine whether there already exists a lock
>>>>> resource mapped to the file, by doing the following things in this
>>>>> order: 1) looking at the master lock resources on node A,
>>>>> 2) searching the local copies of lock resources on node A,
>>>>> 3) searching the lock directory on node A to find out whether a
>>>>> master lock resource associated with the file exists on another
>>>>> node, 4) sending messages to other nodes in the cluster for the
>>>>> location of the master lock resource?
>>>>>
>>>>> I ask this question because, from some online articles, it seems
>>>>> that DLM will always search the cluster-wide lock directory across
>>>>> the whole cluster first to find the location of the master lock
>>>>> resource.
>>>>>
>>>>> Can anyone kindly confirm the order of processes that DLM does?
>>>>>
>>>> This should be very well documented, as it's common amongst DLM
>>>> implementations.
>>>>
>>> I think I may be blind. I have not yet found a document which
>>> describes the sequence of processes in a precise way. I tried to
>>> read the source code but I gave up due to the lack of comments.
>>>
>>>
>>>> If a node needs to lock a resource that it doesn't know about, then
>>>> it hashes the name to get a directory node ID, then asks that node
>>>> for the master node. If there is no master node (the resource is
>>>> not active), then the requesting node is made master.
>>>>
>>>> If the node does know the master (other locks on the resource
>>>> exist), then it will go straight to that master node.
>>>
>>> Thanks for the description.
>>>
>>> However, one point that is still not clear to me is how a node can
>>> conclude whether it __knows__ the lock resource or not.
>> A node knows the resource if it has a local copy. It's as simple as
>> that.
>>
> 
> If the node were a human with a brain, it could "immediately" recall
> that it knows the lock resource. However, a computer program does not
> "know" anything until it searches for the target in what it has on
> hand.
> 
> Therefore, the point here is the __search__. What should the node
> search, in which order, and how does it search?
> 
> If I missed anything, please kindly point it out so that I can
> clarify my question as clearly as possible.
> 
>

I think you're trying to make this more complicated than it is. As I've
said several times now, a node "knows" a resource if there is a local
lock on it. That's it! It's not more or less difficult than that, really
it isn't! If the node doesn't have a local lock on the resource then it
doesn't "know" it and has to ask the directory node where it is
mastered. (As I'm sure you already know, locks are known by their lock
ID numbers, so there's no "search" involved there either).
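
To put that "local lock" test into rough code: a minimal sketch in C,
where all the names and structures are invented for illustration (this
is not the actual fs/dlm code). A node "knows" a resource exactly when a
resource block for it already sits in its own table, i.e. it holds a
lock on it.

    /* Hypothetical sketch, not the actual fs/dlm code. */
    #include <stddef.h>
    #include <string.h>

    struct rsb {
        char name[64];        /* resource name */
        int  master_nodeid;   /* node that masters this resource */
    };

    /* Resources this node currently holds locks on (invented table). */
    static struct rsb local_rsbs[128];
    static int local_rsb_count;

    /* Return the local copy if we hold a lock on the resource, else NULL. */
    static struct rsb *find_local_rsb(const char *name)
    {
        for (int i = 0; i < local_rsb_count; i++)
            if (strcmp(local_rsbs[i].name, name) == 0)
                return &local_rsbs[i];
        return NULL;
    }

If find_local_rsb() returns a resource block, the request can go
straight to that master node; otherwise the directory node has to be
asked, as sketched below.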

There is no "search" for a lock around the cluster; that's what the
directory node provides. And as I have already said, the directory node
is located by hashing the resource name to yield a node ID.
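
A minimal sketch of that hash, purely illustrative (the actual hash
function and node-selection rules in the DLM differ), the only point
being that every node computes the same answer with no messages at all:

    /* Hypothetical sketch: map a resource name onto a directory node ID. */
    #include <stdint.h>

    static int directory_nodeid(const char *name, const int *nodeids,
                                int num_nodes)
    {
        uint32_t h = 2166136261u;                  /* FNV-1a style hash */
        for (const unsigned char *p = (const unsigned char *)name; *p; p++)
            h = (h ^ *p) * 16777619u;
        return nodeids[h % (uint32_t)num_nodes];   /* same result on every node */
    }

Because the result is a pure function of the resource name and the node
list, the requesting node can send its "who is the master?" query
directly to that one node instead of polling the cluster.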

So, if you like, the "search" you seem to be looking for is simply a
hash of the resource name. But it's not really a search, and it's only
invoked when the node first encounters a resource.

Chrissie

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
