Re: Poor iSCSI read performance

Hi Nick,

Sorry about the late reply, I'm juggling multiple projects at the moment
and haven't had a chance to look at this again until now.

On 18/06/14 18:38, Nicholas A. Bellinger wrote:
> Hi Chris,
> 
> On Wed, 2014-06-18 at 16:22 +0100, Chris Boot wrote:
>> Hi folks,
>>
>> I'm trying to work out the source of some pretty poor read performance
>> I'm seeing over iSCSI. I'm doing all my testing using a 16GB rd_mcp
>> backstore over "crossover" 10G ethernet between my target and initiator
>> with no switches in between. I'm seeing 780MB/sec sequential writes
>> (testing with dd) but "only" 520-550MB/sec reads.
>>
>> The target is a fairly beefy dual Xeon E5 machine, while the initiator
>> is an older Xeon 5148. Both are using Solarflare SFN5162F 10G ethernet
>> cards with kernel 3.14.5 (from Debian). I can easily saturate the link
>> using iperf, although admittedly CPU usage is high when doing so. I'm
>> running irqbalance on both ends and MTU is 9000.
>>
>> This is a fairly typical run:
>>
>> # dd if=/dev/zero of=/dev/sdg bs=1M
>> dd: writing `/dev/sdg': No space left on device
>> 16385+0 records in
>> 16384+0 records out
>> 17179869184 bytes (17 GB) copied, 21.3697 s, 804 MB/s
>> # dd if=/dev/sdg of=/dev/null bs=1M
>> 16384+0 records in
>> 16384+0 records out
>> 17179869184 bytes (17 GB) copied, 31.2977 s, 549 MB/s
>>
>> Most of the system settings on both sides are fairly vanilla, as the
>> things I've tried so far haven't had much effect. The only real
>> changes I have now are:
>>
>> net.core.rmem_max = 16777216
>> net.core.wmem_max = 16777216
>>
>> It feels like there's a bottleneck somewhere but I can't quite put my
>> finger on it. All suggestions gratefully received.
>>
> 
> I'd recommend bumping the 'default_cmdsn_depth' from 16 to 64 for 10
> Gb/sec links, and set the ioscheduler to 'noop' on the initiator side.

It appears that the default_cmdsn_depth is already set to 64 on this
target, and the cmdsn_depth on all the ACLs is set to 64 as well.
/sys/block/sdX/device/queue_depth on the initiator is set to 128.
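
For reference, this is roughly where I'm reading those values from; the
IQNs below are just placeholders and the exact configfs layout may vary
a little between kernel/targetcli versions:

# cat /sys/kernel/config/target/iscsi/<target-iqn>/tpgt_1/attrib/default_cmdsn_depth
# cat /sys/kernel/config/target/iscsi/<target-iqn>/tpgt_1/acls/<initiator-iqn>/cmdsn_depth
# cat /sys/block/sdX/device/queue_depth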

So far I have been testing with the deadline scheduler. Switching to
noop makes little difference.
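
For the record, I'm switching it on the initiator along these lines,
with sdg being the iSCSI disk in question:

# echo noop > /sys/block/sdg/queue/scheduler
# cat /sys/block/sdg/queue/scheduler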

> Also, you'll want to verify using fio with iodepth=32 + direct=1 +
> numjobs > 1 settings.

At this point I'm trying to get the best possible streaming IO rate out
of the target, so dd seemed like a fairly simple way to test just that.

Using the following fio config:

[file1]
ioengine=libaio
buffered=0
rw=read
bs=128k
size=8g
direct=1
iodepth=32
filename=/dev/sdg
numjobs=2
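
I'm just invoking it as a plain job file; the file name below is only
whatever I happened to save the above as:

# fio read-test.fio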

I get the following stats, which are a little better (roughly 630 MB/s
aggregate, against the ~550 MB/s I was seeing with dd) but still not as
much as I'd expect:

   READ: io=16384MB, aggrb=643940KB/s, minb=321970KB/s, maxb=323909KB/s,
mint=25898msec, maxt=26054msec

It may well be that my initiator is too underpowered to push that much
data around, but top and friends show a fair amount of idle CPU time
during the test. The target doesn't bat an eyelid, as expected.

Thanks,
Chris

-- 
Chris Boot
Tiger Computing Ltd
"Linux for Business"

Tel: 01600 483 484
Web: http://www.tiger-computing.co.uk
Follow us on Facebook: http://www.facebook.com/TigerComputing

Registered in England. Company number: 3389961
Registered address: Wyastone Business Park,
 Wyastone Leys, Monmouth, NP25 3SR