Re: new bottleneck section in wiki

On Wed, Jul 2, 2008 at 2:03 PM, Matt Garman <matthew.garman@xxxxxxxxx> wrote:
> On Wed, Jul 02, 2008 at 12:04:11PM -0500, David Lethe wrote:
>> The PCI (and PCI-X) bus is shared bandwidth, and operates at the
>> lowest common denominator.  Put a 33MHz card on the PCI bus and
>> not only does everything run at 33MHz, but all of the cards
>> compete for that bandwidth.  Grossly simplified, if you have a
>> 133MHz card and a 33MHz card on the same PCI bus, the 133MHz card
>> will effectively run at about 16MHz: the bus drops to 33MHz and
>> the two cards share it.  Your motherboard's embedded Ethernet
>> chip and disk controllers are "on" the PCI bus, so even if you
>> have a single PCI controller card and a multiple-bus motherboard,
>> it makes a difference which slot you put the controller in.
>
> Is that true for all PCI-X implementations?  What's the point, then,
> of having PCI-X (64 bit/66 MHz or greater) if you have even one PCI
> card (32 bit/33 MHz)?
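I can't speak to every PCI-X implementation, but you don't have to guess
at the topology: lspci (from pciutils) should show which devices hang off
which bus, and what width/speed a PCI-X or PCIe link is actually running
at, something like:

  # show the bus/bridge topology; devices under the same bridge share a bus
  lspci -tv
  # verbose per-device output, including bus width and clock/link state
  lspci -vv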

This motherboard (EPoX MF570SLI) uses PCI-E.
It has a plain old PCI video card in it
(Trident Microsystems TGUI 9660/938x/968x),
and yet I appear to be able to sustain plenty of disk bandwidth reading
four drives at once: one dd if=/dev/sdX of=/dev/null bs=64k per drive,
sdb through sde, run in parallel (sketch below).
vmstat 1 reports 290000 to 310000 in the "bi" (blocks in) column,
hovering around 300000.
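For the record, the test was roughly this (device names are the four
drives on this particular box; adjust for your own setup):

  # start one sequential read per drive, all in parallel
  for d in sdb sdc sdd sde; do
      dd if=/dev/$d of=/dev/null bs=64k &
  done
  # watch the aggregate read rate in vmstat's "bi" column; interrupt when done
  vmstat 1
  # then wait for the background dd processes to finish
  wait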

4x70 MB/s would be more like 280, 4x75 is 300, and the observed ~300000
blocks/s works out to roughly 300 MB/s, so all four drives are streaming
at full speed.  Clearly the system is not bandwidth challenged.
(This is with about 4500 context switches/second, BTW.)
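
If the unit conversion isn't obvious: the numbers above only work out if
vmstat's "bi" column is counting 1 KiB blocks per second (which is what
this procps appears to do), so a rough MB/s figure is just bi divided by
1000.  Something like this prints it directly, assuming the standard
vmstat column layout (bi is the 9th column):

  # take five 1-second samples; skip the two header lines and the first
  # sample (an average since boot), then convert bi to approximate MB/s
  vmstat 1 5 | awk 'NR > 3 { printf "%d MB/s\n", $9 / 1000 }'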

-- 
Jon