RE: new bottleneck section in wiki

-----Original Message-----
From: Keld Jørn Simonsen [mailto:keld@xxxxxxxx] 
Sent: Wednesday, July 02, 2008 12:51 PM
To: David Lethe
Cc: linux-raid@xxxxxxxxxxxxxxx
Subject: Re: new bottleneck section in wiki

On Wed, Jul 02, 2008 at 12:04:11PM -0500, David Lethe wrote:
> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of Keld Jørn Simonsen
> Sent: Wednesday, July 02, 2008 10:56 AM
> To: linux-raid@xxxxxxxxxxxxxxx
> Subject: new bottleneck section in wiki
> 
> I should have done something else this afternoon, but anyway, I was
> inspired to write up this text for the wiki. Comments welcome.
 ....

> 
> I would add -
> The PCI (and PCI-X) bus is shared bandwidth, and operates at the lowest common denominator.  Put a 33MHz card on the PCI bus and not only does everything drop to 33MHz, but all of the cards compete for that bandwidth.  Grossly simplified, if you have a 133MHz card and a 33MHz card on the same PCI bus, the 133MHz card will effectively operate at about 16MHz.  Your motherboard's embedded Ethernet chip and disk controllers are "on" the PCI bus, so even if you have a single PCI controller card and a multiple-bus motherboard, it still makes a difference which slot you put the controller in.
> 
> If this isn't bad enough, consider the consequences of arbitration.  All of the PCI devices constantly negotiate among themselves, and then compete against the devices attached to the other PCI busses, for a chance to talk to the CPU and RAM.  As such, every packet your Ethernet card picks up could temporarily suspend disk I/O if you don't configure things wisely.
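
A grossly simplified sketch of that shared-bus arithmetic, assuming 32-bit PCI
and equal sharing between the two cards (the clocks and bus width here are
illustrative assumptions, not measurements of any particular board):

# Grossly simplified model of a shared PCI bus: the whole bus drops to the
# slowest card's clock, and the cards then divide that bandwidth between them.
BUS_WIDTH_BYTES = 4                  # 32-bit PCI (assumption)

def shared_bus_estimate(card_clocks_mhz):
    bus_clock = min(card_clocks_mhz)                     # lowest common denominator
    per_card_clock = bus_clock / len(card_clocks_mhz)    # equal sharing
    per_card_mb_s = per_card_clock * BUS_WIDTH_BYTES     # very rough MB/s
    return bus_clock, per_card_clock, per_card_mb_s

bus, per_card, mb_s = shared_bus_estimate([133, 33])
print("bus runs at %d MHz, each card sees ~%.1f MHz, ~%.0f MB/s" % (bus, per_card, mb_s))
# -> bus runs at 33 MHz, each card sees ~16.5 MHz, ~66 MB/s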

Thanks, I added this text, modified a little. I would also like to
note that I was inspired by some emailing with you when writing the
text.

Current motherboards with onboard disk controllers normally do not have
the disk I/O connected via the PCI or PCI-E busses, but rather directly via
the southbridge. What are typical transfer rates between the southbridge
and the northbridge? Could this potentially be a bottleneck?

And the disk controllers themselves, could these be bottlenecks? They
typically operate at 300 MB/s nominally, per disk channel, and presumably
they then have a connection to the southbridge that is capable of handling
this speed. So for a 4-disk SATA-II controller this would be at least
1200 MB/s, or about 10 gigabit.
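
A quick back-of-the-envelope comparison of that nominal aggregate against an
assumed chipset interconnect of roughly 1 GB/s (an illustrative ballpark for
southbridge-northbridge links, not a figure for any specific chipset):

# Nominal SATA-II aggregate vs. an assumed ~1 GB/s southbridge<->northbridge
# link.  The link bandwidth is an illustrative assumption; real chipset
# interconnects vary.
SATA2_MB_S = 300                # nominal per-channel SATA-II rate
DISKS = 4
CHIPSET_LINK_MB_S = 1000        # assumed interconnect bandwidth (illustrative)

aggregate = SATA2_MB_S * DISKS                  # 1200 MB/s
gigabit_s = aggregate * 8 / 1000.0              # ~9.6 Gbit/s, "about 10 gigabit"
print("nominal aggregate: %d MB/s (~%.1f Gbit/s)" % (aggregate, gigabit_s))
print("exceeds the assumed chipset link by %d MB/s" % (aggregate - CHIPSET_LINK_MB_S))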

best regards
keld
-------------------
It is much more complicated than just saying what the transfer rates are, especially in the world of blocking, arbitration, and unbalanced I/O.  

Everything is a potential bottleneck.  As I am under NDA with most of the controller vendors, I cannot provide specifics, but suffice it to say that certain cards with certain chipsets will max out at well under published speeds.  Heck, you could attach solid-state disks with random I/O access times in the nanosecond range and still only get 150 MB/sec out of certain controllers, even on a PCIe x16 bus. 

BTW, there isn't a SATA-II controller on the planet that will deliver 1200 MB/sec with 4 disk drives. 
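
One way to see what a controller actually sustains, rather than trusting the
published numbers, is to time raw sequential reads from a single disk and then
from all the disks at once. A minimal sketch, assuming four example device
names (adjust for your system, run as root):

# Rough throughput check: time sequential reads from one disk alone and from
# all disks in parallel, to see what the controller/bus really sustains.
# Device names below are examples only; dd with iflag=direct bypasses the
# page cache so repeated runs stay honest.
import subprocess, time
from threading import Thread

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]   # example devices
MB = 1024                      # MiB to read from each disk

def read_disk(dev):
    subprocess.call(["dd", "if=" + dev, "of=/dev/null",
                     "bs=1M", "count=%d" % MB, "iflag=direct"],
                    stderr=subprocess.DEVNULL)

# single-disk baseline
t = time.time()
read_disk(DISKS[0])
print("%s alone: %.0f MB/s" % (DISKS[0], MB / (time.time() - t)))

# all disks at once: if the controller or bus is the bottleneck, the aggregate
# falls well short of 4 x the single-disk figure.
threads = [Thread(target=read_disk, args=(d,)) for d in DISKS]
t = time.time()
for th in threads: th.start()
for th in threads: th.join()
print("all %d disks in parallel: %.0f MB/s aggregate"
      % (len(DISKS), MB * len(DISKS) / (time.time() - t)))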

David



