Re: iostat with raid device...

Let me reword my previous email...

I tried changing stripe_cache_size as follows, with values between 16
and 4096:
echo 512 > /sys/block/md0/md/stripe_cache_size
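
For completeness, the full sweep looks roughly like this (the mount
point and test file size are placeholders for my setup, not exact
paths):

#!/bin/sh
# Sweep stripe_cache_size and time a large sequential write at each value.
# /mnt/md0 and the 1GB test file are placeholders.
for size in 16 32 64 128 256 512 1024 2048 4096; do
    echo $size > /sys/block/md0/md/stripe_cache_size
    sync
    echo 3 > /proc/sys/vm/drop_caches  # drop page cache between runs
    dd if=/dev/zero of=/mnt/md0/testfile bs=1M count=1024 oflag=direct 2>&1 | tail -1
    rm -f /mnt/md0/testfile
done

Per md(4), the memory pinned is page_size * nr_disks *
stripe_cache_size, so larger values should only matter if the workload
can actually keep that many stripes in flight.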

But I'm not seeing much difference in performance. I'm running on a
2.6.27sh kernel.
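
Is something like the following the right way to confirm whether the
cache change has any effect? The avgrq-sz column should show what
request sizes are actually reaching md0 and the member disks:

# Extended stats in kB every 2 seconds; watch the md0 and member-disk
# rows, where avgrq-sz is the average request size in sectors.
iostat -x -k 2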

Any ideas...

Thanks for your help...

On Tue, Apr 12, 2011 at 12:36 PM, Linux Raid Study
<linuxraid.study@xxxxxxxxx> wrote:
> Hello Neil,
>
> For benchmarking purposes, I've configured an array of ~30GB.
> stripe_cache_size is 1024 (so 1M).
>
> BTW, I'm using the Windows copy utility (robocopy) to test
> performance, and I believe the block size it uses is 32kB. But since
> everything gets written through the VFS, I'm not sure how to tune
> stripe_cache_size for optimal performance with this setup...
>
> Thanks.
>
> On Mon, Apr 11, 2011 at 7:51 PM, NeilBrown <neilb@xxxxxxx> wrote:
>> On Mon, 11 Apr 2011 18:57:34 -0700 Linux Raid Study
>> <linuxraid.study@xxxxxxxxx> wrote:
>>
>>> If I use --assume-clean in mdadm, I see performance that is 10-15%
>>> lower compared to the case where this option is not specified. When I
>>> run without --assume-clean, I wait until mdadm prints "recovery_done"
>>> and then run the IO benchmarks...
>>>
>>> Is this perf drop expected?
>>
>> No. And I cannot explain it.... unless the array is so tiny that it all fits
>> in the stripe cache (typically about 1Meg).
>>
>> There really should be no difference.
>>
>> NeilBrown
>>
>