Re: Create Lock to Eliminate RMW in RAID/456 when writing perfect stripes

On Thu, Dec 31 2015, Doug Dumitru wrote:

> A new lock for RAID-4/5/6 to minimize read/modify/write operations.
>
> The Problem:  The background thread for raid wakes up
> asynchronously and will sometimes start processing a write before
> the writing thread has finished updating the stripe cache blocks.
> If the calling thread's write was a long write (longer than the
> chunk size), the background thread will configure a sub-optimal
> raid write operation, resulting in extra IO operations, slower
> performance, and higher wear on Flash storage.
>
> The Easy Fix:  When the calling thread has a long write, it "locks"
> the stripe number with a semaphore.  When the background thread wakes
> up and starts working on a stripe, it takes the same lock and then
> immediately releases it.  This way the background thread will wait for
> the write to fully populate the stripe caches before it starts to
> build a write request.

The code does something a lot like this already.
When the filesystem starts a write, it calls blk_start_plug(), and
when it finishes, it calls blk_finish_plug().

md/raid5 detects this plugging: when it gets a bio and attaches it to
a "stripe_head", the stripe_head is queued on a delayed list which is
not processed until blk_finish_plug() is called.
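
For illustration, the submitting side creates the plug roughly like
this (a minimal sketch, not actual filesystem code, and assuming a
recent kernel's single-argument submit_bio()):

#include <linux/blkdev.h>
#include <linux/bio.h>

/*
 * Batch a run of bios inside one plug so that md/raid5 can hold the
 * matching stripe_heads on its delayed list until the whole request
 * has arrived.
 */
static void submit_plugged_writes(struct bio **bios, int count)
{
        struct blk_plug plug;
        int i;

        blk_start_plug(&plug);          /* start batching on this task */
        for (i = 0; i < count; i++)
                submit_bio(bios[i]);    /* raid5 defers these stripes */
        blk_finish_plug(&plug);         /* unplug: delayed stripes run */
}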

So we should already avoid starting to process a request until we have
the whole request.  But this sometimes doesn't work.  Understanding why
it doesn't work, and what actually happens, would be an important first
step towards fixing the problem.

The plugging doesn't guarantee that a request will be delayed - doing
that can too easily lead to deadlocks.  Rather, it just discourages
early processing.  If a memory shortage appears and RAID5 could free up
some memory by processing a request earlier, it is better to do that
than to wait indefinitely and possibly deadlock.
It is possible that this safety-valve code is triggering too early in
some cases.
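
Roughly, the decision is of this shape (a purely conceptual sketch
with made-up names, not the real r5conf fields or md code paths):

#include <linux/list.h>
#include <linux/types.h>

/*
 * Delayed stripes are normally drained only at unplug time, but a
 * memory shortage can force them through early - the "safety valve"
 * that may be firing too soon.
 */
struct demo_conf {
        struct list_head delayed_list;  /* stripes waiting for unplug */
        bool unplugged;                 /* blk_finish_plug() has run */
        bool low_memory;                /* allocator signalled pressure */
};

static bool demo_may_handle_delayed(struct demo_conf *conf)
{
        /* Handle delayed stripes on unplug, or early under pressure. */
        return conf->unplugged || conf->low_memory;
}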

>
> The Really High Performance Fix:  If the application is well enough
> behaved to write complete, perfect stripes contained in a single BIO
> request, then the whole stripe cache logic can be bypassed.  This lets
> you submit the member disk IO operations directly from the calling
> thread.  I have this running in a patch in the field and it works
> well, but the use case is very limited and something probably breaks
> with more "normal" IO patterns.  I have hit 11GB/sec with RAID-5 and
> 8GB/sec with RAID-6 this way with 24 SSDs.

It would certainly be interesting to find a general solution which
allowed full stripes to be submitted without a context switch.  It
should be possible.  There is already code which avoids copying from the
filesystem buffer into the stripe cache.  Detecting complete stripes and
submitting them immediately should be possible.  Combining several
stripe_heads into a whole stripe should be possible in many cases using
the new stripe-batching.
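
A detector for such writes might look roughly like this (a hypothetical
helper, not an existing md function; real code would use sector_div()
rather than '%' so it also works on 32-bit):

#include <linux/bio.h>
#include <linux/types.h>

/*
 * A bio is a "perfect stripe" if it starts on a stripe boundary and
 * covers exactly data_disks * chunk_sectors sectors, so parity can be
 * computed from the incoming data alone, with no read-modify-write.
 */
static bool bio_is_full_stripe(struct bio *bio, int data_disks,
                               unsigned int chunk_sectors)
{
        sector_t stripe_sectors = (sector_t)data_disks * chunk_sectors;

        return (bio->bi_iter.bi_sector % stripe_sectors) == 0 &&
               bio_sectors(bio) == stripe_sectors;
}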


>
> Tweak-ability:  All of these changes can be exposed in /sys to allow
> sysadmins to tune their system possibly enabling or disabling
> features.  Most useful for early code that might have broken use
> cases.  Then again, too many knobs sometimes just increases confusion.
>
> Asking for Feedback:  I am happy to write "all of the above" and
> submit it and work with the group to get it tested etc.  If this
> interests you, please comment on how far you think I should go.  Also,
> if there are any notes on "submission style", how and where to post
> patches, which kernel version to patch/develop against, documentation
> style, sign-off requirements, etc. please point me at them.
>

Patches should be sent to linux-raid@xxxxxxxxxxxxxxx
Documentation/SubmittingPatches should describe all the required style.

I think a really important first step is to make sure you understand how
the current code is supposed to work, and why it sometimes fails.  The
short notes above should get you started in following what should
happen.

I really don't think locks as you describe them would be part of a
solution.  Flag bits in the stripe_heads, different queues of
stripe_heads and different queuing disciplines might be.
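
For instance, something in this spirit (illustrative only, with
made-up names - not proposed md code):

#include <linux/bitops.h>
#include <linux/types.h>

/*
 * A stripe still accumulating a long write is flagged by the writer;
 * the handler leaves flagged stripes on their queue instead of
 * building a sub-optimal write, much as STRIPE_DELAYED defers
 * handling today.
 */
#define DEMO_STRIPE_FILLING     0       /* hypothetical state bit */

struct demo_stripe {
        unsigned long state;
};

static bool demo_should_handle(struct demo_stripe *sh)
{
        /* Skip stripes whose data is still arriving from the writer. */
        return !test_bit(DEMO_STRIPE_FILLING, &sh->state);
}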

Thanks,
NeilBrown
