Re: High IO Wait with RAID 1

The card has the latest non-RAID firmware loaded on it. The LSI 1068
model ships from Supermicro with the non-RAID firmware by default. Its
ARM processor is only capable of RAID 0 or 1, and only if you load the
RAID firmware.

The other system with the onboard ICH controller exhibits the same
symptoms, so I think my card is configured correctly. The interesting
part is that I can kick off a resync on all three RAID volumes and
the system load and I/O wait stay low. The rebuild rate is 85 MB/s for
the RAID 5 volume and 65 MB/s for the RAID 1 volumes, which is the max
each individual drive can do.

Ryan

On Fri, Mar 13, 2009 at 1:02 PM, David Lethe <david@xxxxxxxxxxxx> wrote:
> -----Original Message-----
>
> From:  "Ryan Wagoner" <rswagoner@xxxxxxxxx>
> Subj:  Re: High IO Wait with RAID 1
> Date:  Fri Mar 13, 2009 12:45 pm
> Size:  2K
> To:  "Bill Davidsen" <davidsen@xxxxxxx>
> cc:  "Alain Williams" <addw@xxxxxxxxxxxx>; "linux-raid@xxxxxxxxxxxxxxx" <linux-raid@xxxxxxxxxxxxxxx>
>
> Yeah, I understand the basics of RAID and the effect cache has on
> performance. It just seems that RAID 1 should offer better write
> performance than a 3-drive RAID 5 array. However, I haven't run the
> numbers, so I could be wrong.
>
> It could be that I just expect too much from RAID 1. I'm debating
> reloading the box with RAID 10 across 160GB of the 4 drives
> (160GB and 320GB) and a mirror on the remaining space. In theory this
> should gain me write performance.
>
> Thanks,
> Ryan
>
> On Fri, Mar 13, 2009 at 11:22 AM, Bill Davidsen <davidsen@xxxxxxx> wrote:
>> Ryan Wagoner wrote:
>>>
>>> I'm glad I'm not the only one experiencing the issue. Luckily the
>>> issues on both my systems aren't as bad. I don't have any errors
>>> showing in /var/log/messages on either system. I've been trying to
>>> track down this issue for about a year now. I just recently made the
>>> connection with RAID 1 and mdadm when copying data on the second
>>> system.
>>>
>>> Unfortunately it looks like the fix is to avoid software RAID 1. I
>>> prefer software RAID over hardware RAID on my home systems for the
>>> flexibility it offers, especially since I can easily move the disks
>>> between systems in the case of hardware failure.
>>>
>>> If I can find time to migrate the VMs, which run my web sites and
>>> email, to another machine, I'll reinstall the one system using RAID
>>> 1 on the LSI controller. It doesn't support RAID 5, so I'm hoping I
>>> can just pass the remaining disks through.
>>>
>
> FYI - you can potentially get a big performance penalty when running an
> LSI RAID card in JBOD mode. The impact varies depending on a lot of
> things. Try loading the JBOD firmware on the card if it supports this
> and re-run your benchmarks.
>
> david
>
>
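On the RAID 1 vs RAID 5 write question raised above, the usual back-of-envelope device-I/O counts do favor RAID 1 for small random writes. A sketch of that arithmetic (it assumes a read-modify-write parity update with no write-back cache absorbing it; sequential full-stripe writes behave differently):

```shell
#!/bin/sh
# Device I/Os per small random write, ignoring caching and stripe alignment.
raid1_ios=2            # RAID 1: write the block to both mirror legs
raid5_ios=$((2 + 2))   # RAID 5: read old data + old parity, write new data + new parity
echo "RAID 1: ${raid1_ios} device I/Os per small write"
echo "RAID 5: ${raid5_ios} device I/Os per small write"
```

So per small write, a two-disk RAID 1 issues half the device I/Os of a RAID 5 read-modify-write, which is why the observed behavior in this thread is surprising.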
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
