Hi Chris,

On Wed, 2014-06-18 at 16:22 +0100, Chris Boot wrote:
> Hi folks,
>
> I'm trying to work out the source of some pretty poor read performance
> I'm seeing over iSCSI. I'm doing all my testing using a 16GB rd_mcp
> backstore over "crossover" 10G ethernet between my target and initiator
> with no switches in between. I'm seeing 780MB/sec sequential writes
> (testing with dd) but "only" 520-550MB/sec reads.
>
> The target is a fairly beefy dual Xeon E5 machine, while the initiator
> is an older Xeon 5148. Both are using Solarflare SFN5162F 10G ethernet
> cards with kernel 3.14.5 (from Debian). I can easily saturate the link
> using iperf, although admittedly CPU usage is high when doing so. I'm
> running irqbalance on both ends and MTU is 9000.
>
> This is a fairly typical run:
>
> # dd if=/dev/zero of=/dev/sdg bs=1M
> dd: writing `/dev/sdg': No space left on device
> 16385+0 records in
> 16384+0 records out
> 17179869184 bytes (17 GB) copied, 21.3697 s, 804 MB/s
> # dd if=/dev/sdg of=/dev/null bs=1M
> 16384+0 records in
> 16384+0 records out
> 17179869184 bytes (17 GB) copied, 31.2977 s, 549 MB/s
>
> Most of the system settings on both sides are fairly vanilla, as the
> things I've tried so far haven't had much effect. The only real
> changes I have now are:
>
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
>
> It feels like there's a bottleneck somewhere but I can't quite put my
> finger on it. All suggestions gratefully received.

I'd recommend bumping 'default_cmdsn_depth' from 16 to 64 for 10 Gb/sec
links, and setting the I/O scheduler to 'noop' on the initiator side.

Also, you'll want to verify using fio with iodepth=32 + direct=1 +
numjobs > 1 settings; a rough sketch of those steps is below.

--nab
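For reference, here's roughly what those changes and the fio run could
look like. Treat this as a sketch, not a recipe: the target IQN and TPG
number are placeholders, /dev/sdg is just the device from your dd output,
and the fio job parameters (1M blocks, libaio, numjobs=4) are one
reasonable choice that satisfies iodepth=32 + direct=1 + numjobs > 1.

# On the target: raise default_cmdsn_depth for the TPG via configfs.
# The depth is negotiated at session login, so have the initiator log
# out and back in afterwards to pick it up.
echo 64 > /sys/kernel/config/target/iscsi/<target-iqn>/tpgt_1/attrib/default_cmdsn_depth

# Equivalently, from the TPG context in targetcli:
#   set attribute default_cmdsn_depth=64

# On the initiator: switch the iSCSI block device to the noop elevator.
echo noop > /sys/block/sdg/queue/scheduler

# Re-test sequential reads with fio instead of dd, using a deep queue,
# direct I/O, and multiple jobs:
fio --name=seqread --filename=/dev/sdg --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 \
    --group_reporting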