Re: [patch 3/3] raid5: relieve lock contention in get_active_stripe()

On Tue, 3 Sep 2013 15:02:28 +0800 Shaohua Li <shli@xxxxxxxxxx> wrote:

> On Tue, Sep 03, 2013 at 04:08:58PM +1000, NeilBrown wrote:
> > On Wed, 28 Aug 2013 14:39:53 +0800 Shaohua Li <shli@xxxxxxxxxx> wrote:
> > 
> > > On Wed, Aug 28, 2013 at 02:32:52PM +1000, NeilBrown wrote:
> > > > On Tue, 27 Aug 2013 16:53:30 +0800 Shaohua Li <shli@xxxxxxxxxx> wrote:
> > > > 
> > > > > On Tue, Aug 27, 2013 at 01:17:52PM +1000, NeilBrown wrote:
> > > > 
> > > > > 
> > > > > > Then get_active_stripe wouldn't need to worry about device_lock at all and
> > > > > > would only need to get the hash lock for the particular sector.  That should
> > > > > > make it a lot simpler.
> > > > > 
> > > > > Did you mean that get_active_stripe() doesn't need device_lock on any code path?
> > > > > How could that be safe? device_lock still protects things like handle_list and
> > > > > delayed_list, which release_stripe() uses while a get_active_stripe() can run
> > > > > concurrently.
> > > > 
> > > > Yes you will still need device_lock to protect list_del_init(&sh->lru),
> > > > as well as the hash lock.
> > > > Do you need device_lock anywhere else in there?
> > > 
> > > That's what I mean. So I need to take both device_lock and hash_lock. To avoid
> > > deadlock, I need to release hash_lock and then re-take device_lock/hash_lock.
> > > Since I drop the lock, I need to recheck whether the stripe can still be found
> > > in the hash. So the seqcount locking doesn't simplify things here; I thought
> > > the seqlock only fixes one race. Did I miss anything?
> > 
> > Can you order the locks so that you take the hash_lock first, then the
> > device_lock?  That would be a lot simpler.
> 
> Looks impossible. For example, in handle_active_stripes() we release several
> stripes, so we can't take hash_lock first.

"impossible" just takes a little longer :-)

do_release_stripe() gets called with only device_lock held.  It also gets
passed an (initially) empty list_head.
If it wants to add the stripe to an inactive list, it puts it on the given
list_head instead.

release_stripe(), after calling do_release_stripe(), calls some function to
grab the appropriate hash_lock for each stripe on the list_head and add it
to that inactive list.
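
In rough code (only a sketch - "temp_inactive_list", "hash_locks",
"inactive_list" and the hash helper stripe_hash_nr() are made-up names,
not necessarily what the patch would end up using):

        /* Called with conf->device_lock held.  Where a stripe would
         * normally be put on an inactive list, park it on the
         * caller-supplied list instead - no hash lock is taken here. */
        static void do_release_stripe(struct r5conf *conf,
                                      struct stripe_head *sh,
                                      struct list_head *temp_inactive_list)
        {
                /* ... handle_list/delayed_list cases stay as they are ... */
                list_add_tail(&sh->lru, temp_inactive_list);
        }

        /* Called after device_lock has been dropped: move each parked
         * stripe to its inactive list under the matching hash lock. */
        static void release_parked_stripes(struct r5conf *conf,
                                           struct list_head *temp_inactive_list)
        {
                while (!list_empty(temp_inactive_list)) {
                        struct stripe_head *sh =
                                list_first_entry(temp_inactive_list,
                                                 struct stripe_head, lru);
                        int hash = stripe_hash_nr(conf, sh->sector); /* hypothetical */

                        list_del_init(&sh->lru);
                        spin_lock_irq(conf->hash_locks + hash);
                        list_add_tail(&sh->lru, conf->inactive_list + hash);
                        spin_unlock_irq(conf->hash_locks + hash);
                }
        }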

release_stripe_list() might collect some stripes from __release_stripe()
that need to go on an inactive list.  It arranges for them to be put on the
right list, with the right lock, the next time device_lock is dropped.  That
might be in handle_active_stripes().

activate_bit_delay() might similarly collect stripes, which are handled the
same way as those collected by release_stripe_list().
etc.

i.e. the hash_locks protect the various inactive lists.  device_lock protects
all the others.  If we need to add something to an inactive list while
holding device_lock, we delay until device_lock can be dropped.
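
The caller-side pattern then looks something like this (again just a sketch;
handle_some_stripes() stands in for handle_active_stripes() /
release_stripe_list(), and release_parked_stripes() is the helper sketched
above):

        static void handle_some_stripes(struct r5conf *conf)
        {
                LIST_HEAD(temp_inactive_list);

                spin_lock_irq(&conf->device_lock);
                /* ... pull stripes off handle_list, __release_stripe() etc.;
                 * anything headed for an inactive list lands on
                 * temp_inactive_list instead of being moved directly ... */
                spin_unlock_irq(&conf->device_lock);

                /* The inactive-list additions now take only hash locks,
                 * so the hash_lock -> device_lock ordering is never
                 * violated. */
                release_parked_stripes(conf, &temp_inactive_list);
        }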

>  
> > > I saw your tree only has seqcount_write lock in one place, but there are still
> > > other places which changing quiesce, degraded. I thought we still need lock all
> > > locks like what I did.
> > 
> > Can you be specific?  I thought I had convinced my self that I covered
> > everything that was necessary, but I might have missed something.
> 
> For example, raid5_quiesce() will change quiesce, which get_active_stripe()
> uses. So my point is that get_active_stripe() still needs to take device_lock.
> It appears you agree that get_active_stripe() needs device_lock. Maybe I
> misunderstood your comments.

raid5_quiesce() might reasonably take all of the hash_locks and then the
device_lock - it is expected to be a rare event and can afford to be
heavy-handed.
get_active_stripe() should only take device_lock for list_del_init(&sh->lru).
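
In rough code for the quiesce path (a sketch again; NR_STRIPE_HASH_LOCKS and
the helper names are made up - the point is only the ordering: all hash locks
in index order, device_lock last, so it nests the same way as the normal
hash_lock -> device_lock order):

        static void lock_all_stripe_locks(struct r5conf *conf)
        {
                int i;

                spin_lock_irq(conf->hash_locks);
                for (i = 1; i < NR_STRIPE_HASH_LOCKS; i++)
                        spin_lock_nest_lock(conf->hash_locks + i,
                                            conf->hash_locks);
                spin_lock(&conf->device_lock);
        }

        static void unlock_all_stripe_locks(struct r5conf *conf)
        {
                int i;

                spin_unlock(&conf->device_lock);
                for (i = NR_STRIPE_HASH_LOCKS - 1; i > 0; i--)
                        spin_unlock(conf->hash_locks + i);
                spin_unlock_irq(conf->hash_locks);
        }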

What else have I missed?

Thanks,
NeilBrown
