Re: [PATCH] md-cluster: Only one thread should request DLM lock

On Fri, Oct 23 2015, Goldwyn Rodrigues wrote:

> On 10/22/2015 09:11 PM, Neil Brown wrote:
>> rgoldwyn@xxxxxxx writes:
>>
>>> From: Goldwyn Rodrigues <rgoldwyn@xxxxxxxx>
>>>
>>> If a DLM lock is in progress, requesting the same DLM lock will
>>> result in -EBUSY. Use a mutex to make sure only one thread calls
>>> dlm_lock() at a time.
>>>
>>> This will fix the error -EBUSY returned from DLM's
>>> validate_lock_args().
>>
>> I can see that we only want one thread calling dlm_lock() with a given
>> 'struct dlm_lock_resource' at a time, otherwise nasty things could
>> happen.
>>
>> However if such a race is possible, then aren't there other possible
>> complications?
>
> This is specific to the duration of the dlm_lock() function only, and not
> the entire lifetime of the resource. If one thread has requested dlm_lock()
> and another thread comes in and calls dlm_lock() on the same resource,
> we will get -EBUSY on the second one because the lock is already requested.
>
> Our dlm_unlock_sync() call is also a dlm_lock_sync() call, and eventually a
> dlm_lock() call, with a NULL lock.
>
>>
>> Suppose two threads try to lock the same resource.
>> Presumably one will try to lock the resource, then the next one (when it
>> gets the mutex) will discover that it already has the resource, but will
>> think it has exclusive access - maybe?
>
> I am not sure if I understand this. DLM locks are supposed to be at the 
> node level as opposed to thread level.

I think this is exactly my point.  I think we need some extra
thread-level locking.
For example, suppose some thread calls sendmsg(), which takes the token
lock, and then, while that is happening, metadata_update_start() gets
called.
It will try to take the token lock, but as the node already holds the
lock, it will succeed trivially.  Then two threads on the one node both
think they have the lock, which will almost certainly lead to confusion.

So we need to hold some mutex for the entire time that sendmsg() is running,
and we need to hold that same mutex when calling metadata_update_start().
Once we have that, there is no need for the mutex you introduced, which
is just held while claiming the lock.
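
Something like this is what I have in mind (only a sketch - the
'send_mutex' field is purely illustrative and not in the current code;
the rest is meant to be the existing sendmsg() and
metadata_update_start() paths):

static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg)
{
	int ret;

	mutex_lock(&cinfo->send_mutex);	/* illustrative per-cinfo mutex */
	lock_comm(cinfo);		/* takes the DLM token lock (node level) */
	ret = __sendmsg(cinfo, cmsg);
	unlock_comm(cinfo);
	mutex_unlock(&cinfo->send_mutex);
	return ret;
}

static int metadata_update_start(struct mddev *mddev)
{
	struct md_cluster_info *cinfo = mddev->cluster_info;

	/* held, together with the token lock, until the update is
	 * finished or cancelled */
	mutex_lock(&cinfo->send_mutex);
	lock_comm(cinfo);
	return 0;
}

With that, a thread entering metadata_update_start() sleeps on the mutex
while sendmsg() is in flight, instead of trivially "succeeding" on a token
lock the node already holds.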

It could be that we can use ->reconfig_mutex for a lot of this.
Certainly we always hold ->reconfig_mutex while performing a metadata
update.
We probably don't want to take it just for ->resync_info_update().

I'm not sure whether it would be best to have a per-resource mutex which we
take in dlm_lock_sync() and drop in dlm_unlock_sync(), or whether we want the
locking at a higher level.
Probably ->reconfig_mutex is already used where we need higher-level
locking.
So if you change your patch to unlock in dlm_unlock_sync() rather than
at the end of dlm_lock_sync(), then I think it will make sense.
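
That is, roughly (a sketch only - "__dlm_lock_sync()" here just stands
for the current body of dlm_lock_sync(), and the mutex field name is
illustrative):

static int dlm_lock_sync(struct dlm_lock_resource *res, int mode)
{
	int ret;

	mutex_lock(&res->mutex);		/* illustrative field */
	ret = __dlm_lock_sync(res, mode);	/* dlm_lock() + wait for the ast */
	if (ret)
		mutex_unlock(&res->mutex);	/* request failed, don't stay held */
	return ret;
}

static int dlm_unlock_sync(struct dlm_lock_resource *res)
{
	/* the "unlock" is really a conversion to NL mode, so it must not
	 * try to take the mutex again */
	int ret = __dlm_lock_sync(res, DLM_LOCK_NL);

	mutex_unlock(&res->mutex);
	return ret;
}

Then a second thread calling dlm_lock_sync() on the same resource just
sleeps on the mutex until the first thread has unlocked, rather than
getting -EBUSY back from validate_lock_args().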

Thanks,
NeilBrown


