Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.

On 2/1/2014 2:55 PM, Chris Murphy wrote:
> 
> On Feb 1, 2014, at 11:47 AM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
> wrote:
> 
>> On 1/31/2014 12:35 AM, Chris Murphy wrote:
>>> Hopefully this is an acceptable way to avoid thread jacking, by 
>>> renaming the  subject…
>>> 
>>> On Jan 30, 2014, at 10:58 PM, Stan Hoeppner
>>> <stan@xxxxxxxxxxxxxxxxx> wrote:
>>>> 
>>>> RAID60 is a nested RAID level just like RAID10 and RAID50.  It
>>>> is a stripe, or RAID0, across multiple primary array types,
>>>> RAID6 in this case.  The stripe width of each 'inner' RAID6
>>>> becomes the stripe unit of the 'outer' RAID0 array:
>>>> 
>>>> RAID6 geometry  128KB * 12 = 1536KB
>>>> RAID0 geometry  1536KB * 3 = 4608KB
>>> 
>>> My question is on this particular point. If this were hardware
>>> raid6, but I wanted to then stripe using md raid0, using the
>>> numbers above would I choose a raid0 chunk size of 1536KB? How
>>> critical is this value for, e.g. only large streaming read/write
>>> workloads? If it were smaller, say 256KB or even 32KB, would
>>> there be a significant performance consequence?
>> 
>> You say 'if it were smaller...256/32KB'.  What is "it"
>> referencing?
> 
> it = chunk size for md raid0.
> 
> So chunk size 128KB * 12 disks, hardware raid6. Chunk size 32KB [1]
> striping the raid6's with md raid0.

Frankly, I don't know whether you're pulling my chain, or really don't
understand the concept of nested striping.  I'll assume the latter.

When nesting stripes, the chunk size of the outer stripe is -always-
equal to the stripe width of each inner striped array, as I clearly
demonstrated earlier:

3 RAID6 arrays
RAID6  geometry    128KB * 12 = 1536KB
RAID60 geometry   1536KB *  3 = 4608KB

mdadm allows you enough rope to hang yourself in this situation: it
doesn't know the geometry of the underlying hardware arrays, and even
if it did, it has no sanity checking for this.  It can't save you
from yourself.
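
As a rough sketch only -- the device names are made up, and it
assumes a kernel and mdadm new enough to accept a non-power-of-two
RAID0 chunk size -- the correct nesting for the example above would
look something like:

  # Three hardware RAID6 LUNs (hypothetical names), each with a 128KB
  # chunk across 12 data spindles, so inner stripe width = 1536KB.
  # The outer md RAID0 chunk (--chunk, in KiB) must equal that width.
  mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=1536 \
        /dev/sda /dev/sdb /dev/sdc

  # Align XFS to the outer geometry: su = outer chunk, sw = number of
  # inner RAID6 arrays, i.e. 1536KB * 3 = 4608KB per full stripe.
  mkfs.xfs -d su=1536k,sw=3 /dev/md0

The point is simply that every outer chunk boundary then lands
exactly on an inner full-stripe boundary.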

RAID HBA and SAN controller firmware simply won't allow this.  They
automatically set the RAID60 chunk size equal to the RAID6 stripe
width.  If some vendor's firmware allows you to manually enter a
RAID60 chunk size different from the RAID6 stripe width, stay away
from that vendor.

-- 
Stan
