Re: A little RAID experiment

On 7/18/2012 1:44 AM, Stefan Ring wrote:
> On Wed, Jul 18, 2012 at 4:18 AM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
>> On 7/17/2012 12:26 AM, Dave Chinner wrote:
>> ...
>>> I bet it's single threaded, which means it is:
>>
>> The data given seems to strongly suggest a single thread.
>>
>>> Which means throughput is limited by IO latency, not bandwidth.
>>> If it takes 10us to do the write(2), issue and process the IO
>>> completion, and it takes 10us for the hardware to do the IO, you're
>>> limited to 50,000 IOPS, or 200MB/s. Given that the best being seen
>>> is around 35MB/s, you're looking at around 10,000 IOPS of 100us
>>> round trip time. At 5MB/s, it's 1200 IOPS or around 800us round
>>> trip.
>>>
>>> That's why you get different performance from the different raid
>>> controllers - some process cache hits a lot faster than others.
>> ...
>>> IOWs, welcome to Understanding RAID Controller Caching Behaviours
>>> 101 :)
>>
>> It would be somewhat interesting to see Stefan's latency and throughput
>> numbers for 4/8/16 threads.  Maybe the sysbench "--num-threads=" option
>> is the ticket.  The docs state this is for testing scheduler
>> performance, and it's not clear whether this actually does threaded IO.
>>  If not, time for a new IO benchmark.
> 
> Yes, it is intentionally single-threaded and round-trip-bound, as that
> is exactly the kind of behavior that XFS chose to display.

You're referring to your original huge-metadata problem?  IIRC your
workload there was a single thread, wasn't it?
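
Incidentally, Dave's round-trip arithmetic above is easy to sanity-check.
A quick back-of-the-envelope sketch (Python; assuming the 4KB write size
implied by 50,000 IOPS coming out to ~200MB/s):

  IO_SIZE = 4096  # bytes per write(2); assumed, not taken from Stefan's run

  def single_thread_io(round_trip_us):
      # A single-threaded writer has only one IO in flight at a time,
      # so IOPS is simply the inverse of the round trip time.
      iops = 1e6 / round_trip_us
      return iops, iops * IO_SIZE / 1e6   # IOPS, MB/s (decimal MB)

  for rtt in (20, 100, 800):
      iops, mbs = single_thread_io(rtt)
      print("%4dus round trip -> %6d IOPS, %6.1f MB/s" % (rtt, iops, mbs))

That prints roughly 205, 41 and 5 MB/s -- right in line with the 200MB/s,
35MB/s and 5MB/s cases Dave quoted.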

> I tested with more threads now. It is initially faster, which only
> serves to hasten the tanking, and the response time goes through the
> roof. I also needed to increase the --file-num. Apparently the
> filesystem (ext3) in this case cannot handle concurrent accesses to
> the same file.

*Gasp*  EXT3?  Not XFS?  Why are you posting this thread to the XFS
list?  The two will likely have (significantly) different behavior.

Also, to make any meaningful comparison, we kinda need to know which
controller was targeted by these 3 runs below. ;)
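
For the archives, the invocation was presumably something along these
lines (guessing at sysbench 0.5 fileio syntax from the report format
above; the file count, size and run time here are placeholders, not
Stefan's actual values):

  sysbench --test=fileio --file-num=128 --file-total-size=4G prepare
  sysbench --test=fileio --file-test-mode=rndwr --file-num=128 \
      --file-total-size=4G --num-threads=4 --max-time=20 \
      --max-requests=0 --report-interval=2 run

with --num-threads=8 and 16 for the other two runs.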

> 4 threads:
> 
> [   2s] reads: 0.00 MB/s writes: 23.55 MB/s fsyncs: 0.00/s response time: 1.171ms (95%)
> [   4s] reads: 0.00 MB/s writes: 24.35 MB/s fsyncs: 0.00/s response time: 1.129ms (95%)
> [   6s] reads: 0.00 MB/s writes: 24.55 MB/s fsyncs: 0.00/s response time: 1.141ms (95%)
> [   8s] reads: 0.00 MB/s writes: 25.73 MB/s fsyncs: 0.00/s response time: 1.088ms (95%)
> [  10s] reads: 0.00 MB/s writes: 6.14 MB/s fsyncs: 0.00/s response time: 0.994ms (95%)
> [  12s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 2735.611ms (95%)
> [  14s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3800.107ms (95%)
> [  16s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 4404.397ms (95%)
> [  18s] reads: 0.00 MB/s writes: 0.00 MB/s fsyncs: 0.00/s response time: 3153.588ms (95%)
> [  20s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 4769.433ms (95%)
> 
> 
> 8 threads:
> 
> [   2s] reads: 0.00 MB/s writes: 26.99 MB/s fsyncs: 0.00/s response time: 2.451ms (95%)
> [   4s] reads: 0.00 MB/s writes: 28.12 MB/s fsyncs: 0.00/s response time: 3.153ms (95%)
> [   6s] reads: 0.00 MB/s writes: 25.97 MB/s fsyncs: 0.00/s response time: 2.965ms (95%)
> [   8s] reads: 0.00 MB/s writes: 23.23 MB/s fsyncs: 0.00/s response time: 2.560ms (95%)
> [  10s] reads: 0.00 MB/s writes: 0.00 MB/s fsyncs: 0.00/s response time: 791.041ms (95%)
> [  12s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3458.162ms (95%)
> [  14s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 5519.598ms (95%)
> [  16s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3219.401ms (95%)
> [  18s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 10235.289ms (95%)
> [  20s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3765.007ms (95%)
> 
> 16 threads:
> 
> [   2s] reads: 0.00 MB/s writes: 34.27 MB/s fsyncs: 0.00/s response time: 3.899ms (95%)
> [   4s] reads: 0.00 MB/s writes: 28.62 MB/s fsyncs: 0.00/s response time: 6.910ms (95%)
> [   6s] reads: 0.00 MB/s writes: 27.94 MB/s fsyncs: 0.00/s response time: 6.869ms (95%)
> [   8s] reads: 0.00 MB/s writes: 13.50 MB/s fsyncs: 0.00/s response time: 7.594ms (95%)
> [  10s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 2308.573ms (95%)
> [  12s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 4811.016ms (95%)
> [  14s] reads: 0.00 MB/s writes: 0.00 MB/s fsyncs: 0.00/s response time: 4635.714ms (95%)
> [  16s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3200.185ms (95%)
> [  18s] reads: 0.00 MB/s writes: 0.03 MB/s fsyncs: 0.00/s response time: 9623.207ms (95%)
> [  20s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 8053.211ms (95%)

-- 
Stan


