Michael Guntsche wrote:
> On Mar 1, 2008, at 21:45, Bill Davidsen wrote:
>>> blockdev --setra 65536 <your lv device>
>>> and run the tests again. You are almost certainly going to get the
>>> results you are after.
>> I will just comment that really large readahead values may cause
>> significant memory usage and transfer of unused data. My observations
>> and some posts indicate that very large readahead and/or chunk size
>> may reduce random access performance. I believe you said you had
>> 512 MB RAM; that may be a factor as well.
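
(Side note for the archives: --setra counts 512-byte sectors, so 65536
works out to 65536 * 512 B = 32 MiB of readahead per sequential reader,
while the 3072 mentioned below is only 1.5 MiB. On a 512 MB box that
difference is not academic.)
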
> I did not set such a large read-ahead. I had a look at the md0 device,
> which had a value of 3072, and set this on the LV device as well.
> Performance really improved after this.
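
For future readers, checking and copying that value looks roughly like
this (the VG/LV path is only a placeholder):

  blockdev --getra /dev/md0            # reports the value in 512-byte sectors
  blockdev --setra 3072 /dev/vg0/lv0   # give the LV the same readahead as md0
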
>> Unless you are planning to use this machine mainly for running
>> benchmarks, I would tune it for your actual load and a bit of
>> worst-case avoidance.
> The last part is exactly what I am aiming at right now.
> I tried to keep my changes to a bare minimum:
> * Change the chunk size to 256K
> * Align the physical extent of the LVM to it
> * Use the same parameters for mkfs.xfs that are chosen automatically
>   by mkfs.xfs if called on the md0 device itself
> * Set the read-ahead of the LVM block device to the same value as the
>   md0 device
> * Change the stripe_cache_size to 2048
> With these settings applied to my setup here, RAID+XFS and
> RAID+LVM+XFS perform nearly identically, and that was my goal from the
> beginning.
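
Since the exact commands were not posted, the recipe above maps onto
roughly the following, for anyone finding this in the archives. The disk
count, partition names and VG/LV names are only placeholders:

  # 256 KiB chunk at array creation time (a 3-disk RAID-5 is assumed here)
  mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1

  # Align the LVM data area to the chunk size; recent LVM2 has
  # --dataalignment, older versions used --metadatasize to the same effect
  pvcreate --dataalignment 256k /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -n lv0 -l 100%FREE vg0

  # Give XFS the same stripe geometry it would detect on /dev/md0 itself
  # (su = chunk size, sw = number of data disks, i.e. 2 for a 3-disk RAID-5)
  mkfs.xfs -d su=256k,sw=2 /dev/vg0/lv0

  # Read-ahead on the LV: see the blockdev --setra call above

  # Larger stripe cache for md (costs RAM: roughly entries * 4 KiB * disks)
  echo 2048 > /sys/block/md0/md/stripe_cache_size
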
> Now I am off to figure out what's happening during the initial rebuild
> of the RAID-5, but see my other mail for this.
> Once again, thank you all for your valuable input and support.
Thank you for reporting your results; hopefully they will be useful to some
future seeker of the same info.
--
Bill Davidsen <davidsen@xxxxxxx>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck