I'm playing around with performance tuning my disk access, but something is limiting my bandwidth to the disks, and I was hoping you could help me determine what.

My setup is a sil3132 controller connected to a PMP with five disks behind it. I'm using iostat to measure disk traffic, and I'm generating load with reads via dd.

When I'm accessing a single disk, the bandwidth is 70-80 MiB/s. When I access a second disk, the bandwidth drops to about 50 MiB/s per disk, and all five results in 25 MiB/s per disk. In other words, something is limiting the aggregate to roughly 100-125 MiB/s.

Now the question is what that limiting factor is. The PCIe bus can sustain 250 MiB/s, so even with overhead that should be plenty. The SATA links are in theory 300 MiB/s, so that can't be it either. The remaining factors are the controller and the multiplier chip, and/or the way we access them.

Tejun, what kind of throughput have you seen when you have been testing the sil3132 and multipliers?

Rgds
-- 
Pierre Ossman
Linux kernel, MMC maintainer    http://www.kernel.org
rdesktop, core developer        http://www.rdesktop.org

WARNING: This correspondence is being monitored by the Swedish government. Make sure your server uses encryption for SMTP traffic and consider using PGP for end-to-end encryption.
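For reference, here is a quick tally of the aggregate throughput implied by the per-disk figures above. The numbers come straight from my measurements; the single-disk case is represented by the midpoint of the 70-80 MiB/s range (my choice, not a measurement):

```python
# Per-disk read bandwidth measured with dd + iostat, keyed by the
# number of disks accessed concurrently (values in MiB/s).
measurements = {
    1: 75.0,  # midpoint of the observed 70-80 MiB/s range (assumption)
    2: 50.0,  # per-disk rate with two disks active
    5: 25.0,  # per-disk rate with all five disks active
}

# Aggregate bandwidth = disks * per-disk rate; the ceiling shows up
# as the aggregate flattening out around 100-125 MiB/s.
for disks, per_disk in sorted(measurements.items()):
    total = disks * per_disk
    print(f"{disks} disk(s): {per_disk:.0f} MiB/s each -> {total:.0f} MiB/s aggregate")
```

The aggregate never exceeds about half of the 250 MiB/s the PCIe x1 link should sustain, which is what makes me suspect the controller or the multiplier rather than the bus.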