Re: stripe_cache_size and performance

On Mon, 25 Jun 2007, Jon Nelson wrote:

On Thu, 21 Jun 2007, Jon Nelson wrote:

On Thu, 21 Jun 2007, Raz wrote:

What is your raid configuration?
Please note that stripe_cache_size acts as a bottleneck in some
cases.

Well, that's kind of the point of my email. I'll try to restate things,
as my question appears to have gotten lost.

1. I have a 3-component raid5, ~314G per component. Each component
happens to be the 4th partition of a 320G SATA drive. Each drive
can sustain approx. 70MB/s for reads and writes. Except on the first
drive, the remaining partitions of these disks are not used for anything
else at this time. The system is nominally quiescent during these tests.

2. The kernel is 2.6.18.8-0.3-default on x86_64 (openSUSE 10.2).

3. My best sustained write performance comes with a stripe_cache_size of
4096. Larger than that seems to reduce performance, although only very
slightly.

4. At values below 4096, the absolute write performance is less than the
best, but only marginally.

5. HOWEVER, at any value *above* 640 the 'check' performance is REALLY
BAD (see the table in point 6, and the monitoring sketch after point 7).
By 'check' performance I mean the value displayed by /proc/mdstat
after I issue:

echo check > /sys/block/md0/md/sync_action

When I say "REALLY BAD" I mean < 3MB/s.

6. Here is a short, incomplete table of stripe_cache_size versus 'check'
performance:

384 .... 72-73 MB/s
512 .... 72-73 MB/s
640 .... 73-74 MB/s
768 ....  3-3.4 MB/s

And the performance stays "bad" as I increase the stripe_cache_size.

7. And now, the question: the best absolute 'write' performance comes
with a stripe_cache_size value of 4096 (for my setup). However, any
value of stripe_cache_size above 640 really, really hurts 'check' (and,
one can assume, rebuild) performance. Why?
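
(For reference, here is roughly how each data point above was gathered;
a minimal sketch, with md0 and the value 768 as placeholders to adapt:)

# stripe_cache_size counts cached stripes; memory used is roughly
# this value x 4 KiB x number of member devices.
echo 768 > /sys/block/md0/md/stripe_cache_size

# Kick off a check and watch the sustained rate (sync_speed is in KB/s).
echo check > /sys/block/md0/md/sync_action
watch -n 5 'grep -A 2 md0 /proc/mdstat; cat /sys/block/md0/md/sync_speed'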

--
Jon Nelson <jnelson-linux-raid@xxxxxxxxxxx>


Neil has a patch for the bad speed.

In the meantime, do this (or better, use a lower value such as 30 MB/s,
i.e. 30000):

# Set minimum and maximum raid resync speed to 60 MB/s.
# (The sysfs values below are in KB/s, set per array.)
echo "Setting minimum and maximum resync speed to 60 MB/s..."
echo 60000 > /sys/block/md0/md/sync_speed_min
echo 60000 > /sys/block/md0/md/sync_speed_max
echo 60000 > /sys/block/md1/md/sync_speed_min
echo 60000 > /sys/block/md1/md/sync_speed_max
echo 60000 > /sys/block/md2/md/sync_speed_min
echo 60000 > /sys/block/md2/md/sync_speed_max
echo 60000 > /sys/block/md3/md/sync_speed_min
echo 60000 > /sys/block/md3/md/sync_speed_max
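
Once the check/rebuild has finished, you can hand the arrays back to the
system-wide limits (a sketch for md0, to repeat per array; writing the
keyword "system" reverts to the defaults in
/proc/sys/dev/raid/speed_limit_min and speed_limit_max):

echo system > /sys/block/md0/md/sync_speed_min
echo system > /sys/block/md0/md/sync_speed_max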

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
