-----Original Message-----
From: jnelson@xxxxxxxxxxx [mailto:jnelson@xxxxxxxxxxx] On Behalf Of Jon Nelson
Sent: Wednesday, July 02, 2008 2:38 PM
To: Keld Jørn Simonsen
Cc: Matt Garman; David Lethe; linux-raid@xxxxxxxxxxxxxxx
Subject: Re: new bottleneck section in wiki

>> This motherboard (EPoX MF570SLI) uses PCI-E.
>
> PCI-E is quite different architecturally from PCI-X.
>
>> It has a plain old PCI video card in it:
>> Trident Microsystems TGUI 9660/938x/968x
>> and yet I appear to be able to sustain plenty of disk bandwidth to 4
>> drives:
>> (dd if=/dev/sd[b,c,d,e] of=/dev/null bs=64k)
>> vmstat 1 reports:
>> 290000 to 310000 "blocks in", hovering around 300000.
>>
>> 4x70 would be more like 280, 4x75 is 300. Clearly the system is not
>> bandwidth challenged.
>> (This is with 4500 context switches/second, BTW.)
>
> Possibly you are using an on-board disk controller, and then it most
> likely does not use the PCI-E bus for disk IO.

I only point it out to show how this setup scales. If there were
bottlenecks in the chipset, they'd have shown up in the test.

--
Jon

===========================

Not true, Jon. The test above was limited by the amount of data that a
single drive head can push to the controller with 100% sequential reads
of 64 KB. You can't say anything about chipset bottlenecks, because you
haven't created a condition where the chipset could even be a
bottleneck. The I/O is constrained by the disk drives themselves. Now,
if you attached industrial-class SSDs that operate at media speeds, with
access times in the nanosecond range, then you could be in a position to
benchmark chipset performance.

David

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
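[The throughput arithmetic in the exchange above can be sanity-checked. vmstat's "bi" column counts I/O in 1 KiB blocks per second (see vmstat(8)), so the quoted ~300000 "blocks in" works out to roughly 300 MB/s aggregate, or about 75 MB/s per drive across four drives, matching Jon's "4x75 is 300". A minimal sketch of that conversion, using only the rounded figures quoted in the thread, not a new measurement:]

```python
# Convert the vmstat "blocks in" figure quoted in the thread to MB/s.
# vmstat(8) counts I/O in 1 KiB blocks; 300000 is the mid-range value
# Jon reported for four parallel sequential dd reads.
blocks_in_per_sec = 300_000   # "blocks in" per second, from the thread
drives = 4

aggregate_mb_s = blocks_in_per_sec * 1024 / 1_000_000  # decimal MB/s
per_drive_mb_s = aggregate_mb_s / drives

print(f"aggregate: {aggregate_mb_s:.0f} MB/s")   # ~307 MB/s
print(f"per drive: {per_drive_mb_s:.0f} MB/s")   # ~77 MB/s
```

This is the core of David's point: ~75 MB/s of sustained sequential reads per spindle is close to what a single 2008-era drive head could deliver, so the drives, not the chipset, were the limiting factor in the test.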