Re: [RE] Poorer performance with RAID0 than without?

On my fileserver I encountered a problem where adding a third hard drive drove my hardware interrupts (under ANY OS) through the roof.  The end result was the performance equivalent of disabling DMA on all my drives.

I have never figured out why.

Just something to check for.
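
To check for it: watch the interrupt rate while the array is under load. A minimal sketch with standard tools (nothing here is specific to my controller):

vmstat 1                # watch the "in" column (interrupts per second) during a read
cat /proc/interrupts    # per-device interrupt counts; run twice and compare which line climbs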

On 12/31/06, James Olson <big_spender12@xxxxxxxxx> wrote:
What is the output of hdparm /dev/hde and hdparm /dev/hdg? Of particular interest is whether multcount, using_dma, and readahead are set to on. The kernel can turn them off automatically if there are any seek errors, as can happen early in the FC5 boot sequence with raid0. I have a patch to Red Hat's nash to fix that, if that is the case.
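
Something like this should show (and, if needed, restore) those flags; hde/hdg are the devices from your mail, and the values below are common defaults rather than anything FC5-specific:

hdparm /dev/hde /dev/hdg          # shows multcount, readahead, using_dma
hdparm -d1 -m16 -a256 /dev/hde    # DMA on, 16-sector multcount, 256-sector readahead
hdparm -d1 -m16 -a256 /dev/hdg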






---------[ Received Mail Content ]----------

Subject : Poorer performance with RAID0 than without?

Date : Sat, 30 Dec 2006 23:07:48 -0800

From : "Listbox" <listbox@xxxxxxxxxxxxxx>

To : <ataraid-list@xxxxxxxxxx>



Hi folks!

I have been trying to use a RAID bus controller: Silicon Image, Inc. PCI0680 Ultra ATA-133 Host Controller (rev 02) to create a RAID0 workspace for MythTV, but it's not working right.

I have two Seagate 160 GB drives in a RAID0 array on an SiI0680 PCI ATA RAID controller. I formatted an XFS filesystem on /dev/mapper/sil_agbgdgbjfhei2 and set mythbackend to use it. (My DVB card is a DviCO Fusion Gold 3, Conexant CX23880.)
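
(If the filesystem was made without stripe alignment, that alone can cost some throughput. Re-making it aligned would look something like the sketch below, which assumes a 64 KiB chunk across the two disks; the actual chunk size depends on how the array was created.)

# Sketch: -f forces overwrite; su = stripe unit (chunk size), sw = number of striped data disks
mkfs.xfs -f -d su=64k,sw=2 /dev/mapper/sil_agbgdgbjfhei2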



When I watch live MythTV, I get terrible artifacts on digital TV and slow frame rates on analog. When I reset mythbackend to use an ext3 partition on a single drive (/dev/hdc1), performance is acceptable.

The RAID devices are hde and hdg; the / partition is on hdc. To me it looks like all my physical drives give comparable throughput. This is what hdparm reports:

/dev/hda:
 Timing cached reads:   1668 MB in 2.00 seconds = 834.10 MB/sec
 Timing buffered disk reads:   88 MB in 3.05 seconds = 28.87 MB/sec

/dev/hde:
 Timing cached reads:   1680 MB in 2.00 seconds = 838.49 MB/sec
 Timing buffered disk reads:   168 MB in 3.03 seconds = 55.46 MB/sec

/dev/hdf:
 Timing cached reads:   1688 MB in 2.00 seconds = 843.57 MB/sec
 Timing buffered disk reads:   164 MB in 3.00 seconds = 54.59 MB/sec

/dev/hdg:
 Timing cached reads:   1696 MB in 2.00 seconds = 846.68 MB/sec
 Timing buffered disk reads:   162 MB in 3.03 seconds = 53.45 MB/sec
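
For comparison, the striped device itself can be timed the same way (device name as above; the dd size is arbitrary):

hdparm -t /dev/mapper/sil_agbgdgbjfhei2
dd if=/dev/mapper/sil_agbgdgbjfhei2 of=/dev/null bs=1M count=1024   # sequential read of ~1 GB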



I also tried upping the PCI latency of the RAID card with

setpci -v -s 01:07.0 latency_timer=B0   # PCI0680 Ultra ATA-133 Host

but this had no effect. The DVB card has a latency of 32 (decimal).
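
For reference, the current values can be read back rather than set (bus addresses are the ones on this machine):

setpci -v -s 01:07.0 latency_timer   # read the RAID card's latency timer
lspci -v | grep -i latency           # latency for every device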



So, given comparable per-drive throughput, I would expect better performance from the XFS+RAID setup, but that is not the case. I went to considerable trouble to get RAID and XFS working on my Fedora Core 5 system, and it's pretty discouraging to see that striping actually degrades performance instead of enhancing it.



I just reformatted with ext3 and tried again, and got the same degradation. It looks like the RAID is the problem, but I do not, NOT, NOT want to un-stripe the disks.



What else should I try?

Listbox



_______________________________________________
Ataraid-list mailing list
Ataraid-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ataraid-list
