Kay Diederichs wrote:
> Chan Chung Hang Christopher schrieb:
>
>>> "md1 will read from both disk" is not true in general.
>>> RAID1 md reads from one disk only; it uses the other one in case the
>>> first one fails. No performance gain from multiple copies.
>>>
>> I beg to differ. I have disks in a raid1 md array and iostat -x 1 will
>> show reads coming off both disks. Unless you do not have the multipath
>
> look more carefully - with the current 2.6.18-9.1.22 kernel the bulk of
> the data is read from one of the disks

Hmm... right now I do not have a CentOS 5 box handy. Come on, you chums
who have blasted me before about multipath - prove him wrong with data,
please. I can only pull evidence off a Hardy box.

>> module loaded, md will read off both disks. Now whether md will read
>> equally off both disks - that certainly will not be true in general.
>>
>>> You can easily see this for yourself by setting up a RAID1 from e.g.
>>> sda1 and sdb1 - /proc/mdstat is:
>>>
>>> Personalities : [raid1]
>>> md1 : active raid1 sdb1[1] sda1[0]
>>>       104320 blocks [2/2] [UU]
>>>
>>> and then comparing the output of hdparm -tT :
>>
>> ROTFL.
>>
>> How about using the proper tool (iostat) and generating some disk load
>> instead?
>
> hdparm -tT tests one type of disk access; other tools test other
> aspects. I gave the hdparm numbers because everyone can reproduce them.
> For RAID0 with two disks you do see - using e.g. hdparm - the doubling
> of performance from two disks.
> If you take the time to read (or do) RAID benchmarks, you'll discover
> that Linux software RAID1 is about as fast as a single disk (and RAID0
> with two disks is about twice the speed). It's as simple as that.

I beg to differ again, since I did get combined throughput from an md
raid1 device. I would have saved the iostat output to disk if I had
known it would have some use. Anyway, I have some numbers in my other
post, but from an Ubuntu box.
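For the record, here is roughly the kind of test I have in mind. The device names (/dev/md1, sda, sdb) are placeholders for your own array and its members, and the live commands need root, so only the awk part below actually runs as-is - it is shown against a captured /proc/diskstats sample with made-up numbers:

```shell
# Generate sequential read load on the RAID1 device while watching the
# per-member read rates (placeholder device names, needs root):
#
#   dd if=/dev/md1 of=/dev/null bs=1M count=2048 iflag=direct &
#   iostat -x 1 sda sdb     # compare r/s and read throughput per member
#
# The same counters are available in /proc/diskstats, where field 6 is
# sectors read. A quick awk summary, run here against a captured
# two-line sample (made-up numbers) so it works anywhere:
sample='   8    0 sda 1200 0 240000 300 0 0 0 0 0 0 0
   8   16 sdb 1100 0 220000 280 0 0 0 0 0 0 0'
echo "$sample" | awk '{ printf "%s: %d sectors read (%d KiB)\n", $3, $6, $6/2 }'
```

If both members show substantial, roughly comparable read counts while the array is under load, md is reading from both disks; if one stays near zero, it is not.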
>>> To get performance gain in RAID1 mode you need hardware RAID1.
>>>
>> Bollocks. The only area in which hardware raid has a significant
>> performance advantage over software raid is raid5/6, given sufficient
>> cache memory and processing power.
>
> We were talking about RAID1; RAID5/6 is a different area. Linux software
> RAID1 is a safeguard against disk failure; it is not designed for a
> speed increase. There are a number of things that could be improved in
> Linux software RAID; read performance of RAID1 is one of them - this
> _is_ why some hardware RAID1 adapters are indeed faster than software.
> Read http://kernelnewbies.org/KernelProjects/Raid1ReadBalancing - since
> the 2.6.25 kernel a simple alternating read is implemented, but that
> does not take the access pattern into account.

I have not read that yet, but that is odd, since I have been blasted by
others before for doubting that md raid1 does multiple disk reads. BTW,
the Hardy box's kernel is 2.6.24-22-generic. I guess I need to try to
generate some numbers from an Intrepid box and see if I get better ones.

> So Linux software RAID1 is just mirroring - and it's good at that.

It has gotten good... no more having to sync from beginning to end, I
believe - just like some hardware raid cards.

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos