Re: swidth in RAID

On 6/30/2013 9:09 PM, Dave Chinner wrote:
> On Sun, Jun 30, 2013 at 06:54:31PM -0700, aurfalien wrote:
>>
>> On Jun 30, 2013, at 6:38 PM, Dave Chinner wrote:
>>
>>> On Sun, Jun 30, 2013 at 04:42:06PM -0500, Stan Hoeppner wrote:
>>>> On 6/30/2013 1:43 PM, aurfalien wrote:
>>>>
>>>>> I understand swidth should = #data disks.
>>>>
>>>> No.  "swidth" is a byte value specifying the number of 512 byte blocks
>>>> in the data stripe.
>>>>
>>>> "sw" is #data disks.
>>>>
>>>>> And the docs say for RAID 6 of 8 disks, that means 6.
>>>>>
>>>>> But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
>>>>>
>>>>> So shouldn't swidth equal disks in raid when its concerning distributed parity raid?
>>>>
>>>> No.  Let's try visual aids.
>>>>
>>>> Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
>>>> Separate 24 blues (data) and 8 reds (parity).
>>>>
>>>> Drop a blue m&m in cups 1-6 and a red into 7-8.  You just wrote one RAID
>>>> stripe.  Now drop a blue into cups 3-8 and a red in 1-2.  Your second
>>>> write, this time rotating two cups (drives) to the right.  Now drop
>>>> blues into cups 5-8 and 1-2, and reds into 3-4.  You've written your
>>>> third stripe,
>>>> rotating by two cups (disks) again.
>>>>
>>>> This is pretty much how RAID6 works.  Each time we wrote we dropped 8
>>>> m&m's into 8 cups, 6 blue (data chunks) and 2 red (parity chunks).
>>>> Every RAID stripe you write will be constructed of 6 blues and 2 reds.
>>>
>>> Right, that's how they are constructed, but not all RAID distributes
>>> parity across different disks in the array. Some are symmetric, some
>>> are asymmetric, some rotate right, some rotate left, and some use
>>> statistical algorithms to give an overall distribution without being
>>> able to predict where a specific parity block might lie within a
>>> stripe...
>>>
>>> And at the other end of the scale, isochronous RAID arrays tend to
>>> have dedicated parity disks so that data read and write behaviour is
>>> deterministic and therefore predictable from a high level....
>>>
>>> So, assuming that a RAID5/6 device has a specific data layout (be it
>>> distributed or fixed) at the filesystem level is just a bad idea. We
>>> simply don't know. Even if we did, the only thing we can optimise is
>>> the thing that is common between all RAID5/6 devices - writing full
>>> stripe widths is the most optimal method of writing to them....
>>
>> Am I interpreting this to say:
>>
>> 16 disks is sw=16 regardless of parity?
> 
> No. I'm just saying that parity layout is irrelevant to the
> filesystem and that all we care about is that sw does not include
> parity disks.

So, here's the formula, aurfalien, where #disks is the total number of
active disks (excluding spares) in the RAID array.  In the case of

RAID5	sw = (#disks - 1)
RAID6	sw = (#disks - 2)
RAID10  sw = (#disks / 2) [1]

[1] If using the Linux md/RAID10 driver with one of the non-standard
layouts such as n2 or f2, the formula may change.  This is beyond the
scope of this discussion.  Visit the linux-raid mailing list for further
details.
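
As a rough worked example, take the 8-disk RAID6 case and assume a
64KiB chunk size on a hypothetical /dev/md0 (the chunk size and device
are placeholders; substitute your array's actual values):

  # chunk (stripe unit)  = 64KiB
  # data disks           = 8 - 2 = 6
  # full stripe width    = 64KiB * 6 = 384KiB
  mkfs.xfs -d su=64k,sw=6 /dev/md0

  # or equivalently, in 512-byte block units:
  # sunit  = 64KiB / 512 = 128
  # swidth = 128 * 6 = 768
  mkfs.xfs -d sunit=128,swidth=768 /dev/md0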

-- 
Stan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



