Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)

On 4/8/2012 4:42 PM, Stan Hoeppner wrote:
> On 4/7/2012 12:10 PM, Joe Landman wrote:
>> On 04/07/2012 12:50 PM, Peter Grandi wrote:
>>
>>>    * Your storage layer does not seem to deliver parallel
>>>      operations: as the ~100MB/s overall 'ext4' speed and the
>>>      seek graphs show, in effect your 4+2 RAID6 performs in this
>>>      case as if it were a single drive with a single arm.
>>
>> This is what lept out at me.  I retried a very similar test (pulled
>> Icedtea 2.1, compiled it, tarred it, measured untar on our boxen).  I
>> was getting a fairly consistent 4 +/- delta seconds.
> 
> That's an interesting point.  I guess I'd chalked the low throughput up
> to high seeks.
> 
>> 100MB/s on some supposedly fast drives with a RAID card indicates that
>> either the RAID is badly implemented, the RAID layout is suspect, or
>> similar.  He should be getting closer to N(data disks) * BW(single disk)
>> for something "close" to a streaming operation.
> 
> Reading this thread seems to indicate you're onto something Joe:
> http://h30499.www3.hp.com/t5/System-Administration/Extremely-slow-io-on-cciss-raid6/td-p/4214888

The P400 uses the LSISAS1078 chip: a 500MHz PowerPC core with "2 hardware
RAID5/6 processors".  Benchmarks under Windows with 8x750GB SATA drives
on an LSI 1078 based card show sequential RAID6 write rates of ~100MB/s,
and a RAID0 write rate of 350MB/s for the same 8 drives.  These drives
are capable of 50MB/s sustained writes, so the RAID0 performance isn't
far off the hardware max.
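Back-of-the-envelope check on those figures (numbers taken from the
benchmarks above):

```shell
# Sanity-check the LSI 1078 benchmark numbers quoted above.
drives=8          # SATA drives in the array
per_drive=50      # MB/s sustained sequential write per drive
raid0=350         # MB/s measured RAID0 sequential write
raid6=100         # MB/s measured RAID6 sequential write

ceiling=$((drives * per_drive))            # theoretical streaming ceiling
echo "ceiling: ${ceiling} MB/s"            # 400 MB/s; RAID0 hits ~88% of it
echo "RAID0/RAID6 gap: $((raid0 / raid6))x"  # the ~3x left on the table
```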

It seems the 1078 is simply not that quick with anything but pure
striping.  Hardware RAID10 write performance appears only about 50%
faster than RAID6, and the RAID6 speed is roughly one third of the RAID0
speed.  So exporting the individual drives, as I previously mentioned,
and using md RAID6 should yield roughly a 3x improvement, assuming your
CPUs aren't already loaded down.
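Something along these lines, if the controller will pass the drives
through individually -- a sketch only, with hypothetical device names:

```shell
# Sketch -- assumes the P400 can export the 8 drives as single-disk
# volumes, and that they appear as /dev/sdb through /dev/sdi (placeholders).
# DESTRUCTIVE: this wipes the member drives.
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# Rough sequential write check against the raw array once it's built:
dd if=/dev/zero of=/dev/md0 bs=1M count=4096 oflag=direct
```

md will resync the array in the background after creation, so wait for
that (or use --assume-clean at your own risk) before benchmarking.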

Or, as others have mentioned, simply install an MLC SSD and get 10-100x
more random throughput with XFS if you match the agcount to the number
of flash chips in the SSD.  XFS parallelism flexing its muscles once
again.  EXT4 won't improve as much, since it tends to write the flash
chips sequentially.  Newegg currently has two Mushkin 120GB models for
$120 each, both rated 4/5 eggs.

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100008120+600038484+50001504&QksAutoSuggestion=&ShowDeactivatedMark=False&Configurator=&IsNodeId=1&Subcategory=636&description=&hisInDesc=&Ntk=&CFG=&SpeTabStoreType=&AdvancedSearch=1&srchInDesc=
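Setting the agcount at mkfs time would look like this -- a sketch, with
the channel count and device name as assumptions:

```shell
# Sketch -- assumes the SSD has 8 flash channels (check the model's spec
# sheet) and shows up as /dev/sdX1 (placeholder).  agcount=8 gives XFS
# one allocation group per channel, so concurrent allocations can land
# on different chips in parallel.
mkfs.xfs -d agcount=8 /dev/sdX1
```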

-- 
Stan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

