On Tue, Aug 11, 2009 at 8:41 PM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
> On Tue, Aug 11 2009, Bart Van Assche wrote:
>> On Tue, Aug 11, 2009 at 7:14 PM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
>> > Did you profile this? Where did it burn all the CPU time on the
>> > initiator side?
>>
>> The test I ran involved a Linux SRP initiator and a Linux SRP target
>> (SCST) using a RAM disk as backing storage. Read throughput is about
>> 1700 MB/s for block sizes of 8 MB and above, but with a block size of
>> 4 KB the read throughput on the initiator drops to 100 MB/s. At that
>> block size the InfiniBand HCA in the initiator system generates about
>> 50,000 interrupts per second. On the same setup the ib_send_bw tool
>> reports a throughput of 1850 MB/s for a block size of 4 KB; that tool
>> is not interrupt driven but uses polling.
>
> OK, so that looks promising at least. Which hw driver does it use? If I
> look under infiniband/, I see nes, amso, ehca, various ipath and mthca.
> That's where it needs to be hooked up, the srp above mostly looks like
> library helpers and the target hook to the scsi layer.

The above numbers were obtained on Mellanox ConnectX hardware, which is
driven by the mlx4_core and mlx4_ib kernel modules. The source code for
these drivers can be found in drivers/infiniband/hw/mlx4 and
drivers/net/mlx4.

Bart.
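
For reference, below is a minimal sketch of what such a hookup in the hw
driver's completion path could look like, assuming a NAPI-style polled
completion interface along the lines of blk-iopoll. The srp_poll_ctx
structure and the srp_* function names are illustrative assumptions, not
code from any posted patch; ib_poll_cq() and ib_req_notify_cq() are the
standard IB verbs, and the blk_iopoll_sched_prep()/blk_iopoll_sched()
idiom follows the interface as it was eventually merged (prep returning 0
on success), which may differ slightly from the version under discussion
here. The idea is that the CQ event handler only schedules a poll
routine, and the poll routine drains completions up to a budget before
re-arming the CQ interrupt:

#include <linux/blk-iopoll.h>
#include <rdma/ib_verbs.h>

/* Hypothetical per-CQ context tying a CQ to its poll state. */
struct srp_poll_ctx {
	struct blk_iopoll	iopoll;
	struct ib_cq		*cq;
};

/* Runs in softirq context: drain up to 'budget' completions from the CQ. */
static int srp_iopoll(struct blk_iopoll *iop, int budget)
{
	struct srp_poll_ctx *ctx = container_of(iop, struct srp_poll_ctx, iopoll);
	struct ib_wc wc;
	int done = 0;

	while (done < budget && ib_poll_cq(ctx->cq, 1, &wc) > 0) {
		/* hand 'wc' to the normal SRP completion processing here */
		done++;
	}

	if (done < budget) {
		/* CQ drained: stop polling and re-arm the completion interrupt. */
		blk_iopoll_complete(iop);
		ib_req_notify_cq(ctx->cq, IB_CQ_NEXT_COMP);
	}

	return done;
}

/* CQ event handler: do no completion work here, just kick the poller. */
static void srp_cq_event_handler(struct ib_cq *cq, void *cq_context)
{
	struct srp_poll_ctx *ctx = cq_context;

	if (!blk_iopoll_sched_prep(&ctx->iopoll))
		blk_iopoll_sched(&ctx->iopoll);
}

The corresponding setup would be a blk_iopoll_init()/blk_iopoll_enable()
pair when the CQ is created, with the weight bounding how many
completions are handled per poll iteration. A production version would
also have to deal with the race between the final poll and the re-arm,
for instance by re-checking the CQ via IB_CQ_REPORT_MISSED_EVENTS.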