Re: makefs alignment issue

On 10/26/2014 06:43 PM, Dave Chinner wrote:
> On Sat, Oct 25, 2014 at 12:35:17PM -0500, Stan Hoeppner wrote:
>> If the same interface is used for Linux logical block devices (md, dm,
>> lvm, etc) and hardware RAID, I have a hunch it may be better to
>> determine that, if possible, before doing anything with these values.
>> As you said previously, and I agree 100%, a lot of RAID vendors don't
>> export meaningful information here.  In this specific case, I think the
>> RAID engineers are exporting a value, 1 MB, that works best for their
>> cache management, or some other path in their firmware.  They're
>> concerned with host interface xfer into the controller, not the IOs on
>> the back end to the disks.  They don't see this as an end-to-end deal.
>> In fact, I'd guess most of these folks see their device as performing
>> magic, and it doesn't matter what comes in or goes out either end.
>> "We'll take care of it."
> 
> Deja vu. This is an isochronous RAID array you are having trouble
> with, isn't it?

I don't believe so.  I'm pretty sure the parity rotates; i.e. standard
RAID5/6.

> FWIW, do your problems go away when you make your hardware LUN
> width a multiple of the cache segment size?

Hadn't tried it, and I don't have the opportunity now, as my contract
has ended.  However, the problems we were having weren't controller
issues but excessive seeking.  I mentioned this in that (rather
lengthy) previous reply.

>> optimal_io_size.  I'm guessing this has different meaning for different
>> folks.  You say optimal_io_size is the same as RAID width.  Apply that
>> to this case:
>>
>> hardware RAID 60 LUN, 4 arrays
>> 16+2 RAID6, 256 KB stripe unit, 4096 KB stripe width
>> 16 MB LUN stripe width
>> optimal_io_size = 16 MB
>>
>> Is that an appropriate value for optimal_io_size even if this is the
>> RAID width?  I'm not saying it isn't.  I don't know.  I don't know what
>> other layers of the Linux and RAID firmware stacks are affected by this,
>> nor how they're affected.
> 
> Yup, I'd expect minimum = 4MB (i.e. stripe unit 4MB so we align to
> the underlying RAID6 LUNs) and optimal = 16MB for the stripe width
> (and so with swalloc we align to the first LUN in the RAID0).
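For anyone following along, that maps to mkfs/mount geometry roughly
like this (just a sketch; /dev/sdX and the mount point are
placeholders, and the numbers are yours from above):

    # per-array width (16 x 256 KB = 4 MB) as the stripe unit,
    # four arrays as the stripe width
    mkfs.xfs -d su=4m,sw=4 /dev/sdX
    mount -o swalloc /dev/sdX /mnt/scratch

i.e. su carries the 4MB minimum and su*sw the 16MB optimal.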

With a 4MB minimum, how does that affect journal writes, which will
be much smaller, especially with a large file streaming workload, for
which this setup is appropriate?  Isn't the minimum a hard setting,
i.e. can we never do an IO smaller than 4MB?  Do other layers of the
stack use this variable?  Are they expecting values this large?
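
For what it's worth, what a given layer advertises is easy enough to
read back out of sysfs (a sketch; sdX is a placeholder):

    cat /sys/block/sdX/queue/minimum_io_size
    cat /sys/block/sdX/queue/optimal_io_size

That at least shows what the block layer exports, if not who
consumes it.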

> This should be passed up unchanged through the stack if none of the
> software layers are doing other geometry modifications (e.g. more
> RAID, thinp, etc.).

I agree, if RAID vendors all did the right thing...
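
A quick sanity check on a given box is util-linux lsblk (a sketch):

    lsblk --topology

which prints MIN-IO and OPT-IO per device, so a mismatch between an
md/dm device and its members is easy to spot.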

Stan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



