Re: Port Multipliers

Hi,

I've had a somewhat bad experience with port multipliers. I have a
PCI-e x1 JMB362 on the host end and a SiI 3726 connected to it (I
think; it's a 1-to-5 PM). I have 5 disks connected in raid5 and get
some fairly appalling write speeds, well below what I'd expect even
for raid5 writes. Reads are fairly slow too...

$ dd if=/dev/zero of=./blah bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 47.4814 s, 11.3 MB/s

$ dd if=./bigfile.iso of=/dev/null
8474857+0 records in
8474857+0 records out
4339126272 bytes (4.3 GB) copied, 144.667 s, 30.0 MB/s

Obviously this isn't the most scientific of tests... :-) but it does
show slowness with this particular combination.
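For what it's worth, dd numbers like the above can be skewed by the page cache: without a sync flag the write test partly times RAM, and the read test above used dd's default 512-byte block size. A rough sketch of a slightly fairer run, reusing the same ./blah file:

```shell
# conv=fdatasync makes dd include the final flush to disk in its timing,
# so the page cache doesn't inflate the write figure.
dd if=/dev/zero of=./blah bs=1M count=512 conv=fdatasync

# Drop the page cache before reading back, or the data comes from RAM
# (needs root, so it's commented out here):
#   sync && echo 3 > /proc/sys/vm/drop_caches

# Use bs=1M on the read side too; the 512-byte default adds per-call overhead.
dd if=./blah of=/dev/null bs=1M
```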

I'm tempted to go buy a SiI 3132-based controller and compare the results.

T


2009/9/16 Majed B. <majedb@xxxxxxxxx>:
> Regarding payloads, I recently bought an EVGA motherboard from
> newegg for $120, and it supports raising the payload size to 4096 bytes.
>
> Newegg link: http://www.newegg.com/Product/Product.aspx?Item=N82E16813188035
> Manual: http://www.evga.com/support/manuals/files/113-YW-E115.pdf
>
> The motherboard above has 8 SATA ports, built-in VGA (256MB, if you
> care), 1x Gbit LAN, 4x RAM DIMMs and a few more options. I use it for
> my primary array: 8x1TB disks.
>
> ASUS gaming motherboards allow changing the payload as well.
>
> On Wed, Sep 16, 2009 at 4:28 AM, Doug Ledford <dledford@xxxxxxxxxx> wrote:
>> On Sep 15, 2009, at 9:01 PM, Majed B. wrote:
>>>
>>> I think someone mentioned on the mailing list that the Linux kernel
>>> sorts commands before sending them to the disks, so if the disk also
>>> tries to sort and its own algorithm isn't very good, performance
>>> drops; hence disabling disk-side queueing can be a good idea. I
>>> believe it's also mentioned here: http://linux-raid.osdl.org/index.php/Performance
>>
>>
>> It depends on the elevator in use.  And regardless, I have yet to see a
>> raid5 array ever perform better with queueing turned off rather than on.
>> Although, in many cases, very large queue depths don't help much: testing
>> I've done showed that a queue depth of only 4 to 8 is enough to get 95%
>> or more of the performance benefit of queueing.
>>
>> --
>>
>> Doug Ledford <dledford@xxxxxxxxxx>
>>
>> GPG KeyID: CFBFF194
>> http://people.redhat.com/dledford
>>
>> InfiniBand Specific RPMS
>> http://people.redhat.com/dledford/Infiniband
>>
>
>
>
> --
>       Majed B.
> --
>
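P.S. For the archives, here's a rough sketch of how to inspect the two knobs
discussed above: the PCIe payload size and a drive's NCQ queue depth. The
device name sda is a placeholder, and root is assumed for the lspci fields
and for any writes; substitute your own hardware.

```shell
# Negotiated PCIe Max Payload Size (DevCtl) vs what the device advertises
# (DevCap); unprivileged lspci hides these fields, so run as root.
lspci -vv 2>/dev/null | grep -i maxpayload || echo "(no lspci output; try as root)"

Q=/sys/block/sda/device/queue_depth   # sda is a placeholder

# Current NCQ queue depth (1 effectively turns queueing off):
[ -r "$Q" ] && cat "$Q" || echo "(no $Q on this box)"

# Per the numbers above, a depth of 4-8 gets most of the benefit;
# setting it needs root, so it's commented out here:
#   echo 8 > "$Q"

# Which elevator (I/O scheduler) is in effect for the same disk:
S=/sys/block/sda/queue/scheduler
[ -r "$S" ] && cat "$S" || echo "(no $S on this box)"
```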
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
