Re: Performance issue with O_DIRECT


On Fri, 2015-09-18 at 16:44 +0200, Dragan Milivojević wrote:
> > To clarify, MC/S means a single session (e.g., each $HOST:X:X:X) has
> > more than a single TCP connection.
> >
> > The above with open-iscsi is SC/S (single conn per session) mode.
> 
> Ok, thanks for the clarification. I had no use for it before so I
> haven't researched it properly.
> 
> 
> >> iodepth 1, ramdisk, 128k block size, bandwidth 63MB/s, client iostat avgrq-sz 256
> >> iodepth 1, ramdisk, 256k block size, bandwidth 80MB/s, client iostat avgrq-sz 512
> >> iodepth 1, ramdisk, 512k block size, bandwidth 88MB/s, client iostat avgrq-sz 1024
> >> iodepth 1, ramdisk, 1024k block size, bandwidth 91MB/s, client iostat avgrq-sz 2048
> >> iodepth 1, ramdisk, 4096k block size, bandwidth 93MB/s, client iostat avgrq-sz 8192
> >> iodepth 1, ramdisk, 8192k block size, bandwidth 114MB/s, client iostat avgrq-sz 8192
> >> iodepth 1, ramdisk, 16384k block size, bandwidth 116MB/s, client iostat avgrq-sz 8192
> >>
> >
> > So in order to reach 1 Gb/sec port saturation, you'll need to push
> > iodepth > 1 @64k blocksize, or utilize a larger blocksize at iodepth=1.
> 
> The workload is generated by a Windows client and I have no way of
> changing the block size or iodepth. That's one of the reasons I'm
> exploring this issue.
> 
> > This is about what I'd expected for 1 Gb/sec @ 1500 MTU btw.
> 
> Unfortunately, I am already using jumbo frames; see the iperf3 output
> (TCP MSS: 8948 (default)) in iperf_tests.txt lines 7 & 37.
> 
> 
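For reproducing a sweep like the one above, a fio invocation along these
lines might be used (the device path, runtime, and libaio engine are
assumptions for illustration, not taken from the thread):

```shell
# Hypothetical sequential-read sweep against an iSCSI LUN at iodepth=1.
# /dev/sdX and the 30s runtime are placeholders -- adjust for your setup.
for bs in 64k 128k 256k 512k 1024k; do
    fio --name=seqread --filename=/dev/sdX --rw=read \
        --ioengine=libaio --direct=1 --iodepth=1 --bs="$bs" \
        --runtime=30 --time_based --group_reporting
done
```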

...

> >
> >> Hard drive as backstore:
> >>
> >> iodepth 1, block, 64k block size, bandwidth 50MB/s, client iostat avgrq-sz 128, server iostat avgrq-sz 128
> >> iodepth 2, block, 64k block size, bandwidth 94MB/s, client iostat avgrq-sz 128, server iostat avgrq-sz 128
> >> iodepth 3, block, 64k block size, bandwidth 113MB/s, client iostat avgrq-sz 128, server iostat avgrq-sz 128
> >> iodepth 4, block, 64k block size, bandwidth 118MB/s, client iostat avgrq-sz 128, server iostat avgrq-sz 128
> >>
> >
> > This looks about as I'd expected as well.
> >
> >> Changing the block size has some effect, but unfortunately bandwidth
> >> maxes out at 70MB/s even when the block size was set to 32768K. At
> >> that setting, avgrq-sz on the server was 1024 and on the client 8192.
> >>
> >
> > Using a > 1500 byte MTU may help get you closer to 1 Gb/sec saturation
> > at iodepth=1.
> 
> MTU was set to 9000, so we are back to where we started?
> 
> One interesting observation (at least to me) is the performance
> difference of almost 30% between the ramdisk and block backstores.
> The first thing that comes to mind is the latency of a hard drive
> vs. a ramdisk, but I have seen similar results when using an LV on
> an 8-disk RAID 6 capable of > 500MB/s sequential. At these low
> speeds and with a sequential workload it seems odd. I haven't
> actually done any math to confirm this; it's just a hunch.
> 
> 

For 1 Gb/sec, you'll not be able to saturate the link with iodepth=1 +
blocksize=64k.

This is expected behavior.
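The iodepth=1 ceiling can be illustrated with a back-of-envelope model:
each I/O must finish its wire transfer plus a fixed per-command round
trip before the next one is issued. The constants below (link payload
rate, per-command overhead) are illustrative guesses fitted loosely to
the figures in this thread, not measurements:

```shell
# Model (an assumption, not from the thread): at iodepth=1,
#   throughput = bs / (bs/link_bw + overhead)
awk 'BEGIN {
    link_bw  = 117     # MB/s, approx. 1 Gb/s payload rate
    overhead = 0.00073 # s, fixed per-command latency (illustrative guess)
    for (bs = 64; bs <= 1024; bs *= 2) {
        mb = bs / 1024.0               # block size in MB
        t  = mb / link_bw + overhead   # time per I/O in seconds
        printf "%5dk: %3.0f MB/s\n", bs, mb / t
    }
}'
```

Larger block sizes amortize the fixed overhead, which is why throughput
creeps toward, but never quite reaches, line rate at iodepth=1.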

> > I'm able to reproduce with v4.3-rc1 code.
> >
> > Note this bug is not specific to MC/S operation, and appears to be a
> > regression specific to MSFT iSCSI initiators.
> >
> > Still debugging this, but should have a bug-fix soon.
> >
> > Thanks a lot for reporting this.
> 
> Thank you; hopefully those fixes will be included in the upcoming
> RHEL 7.2, which would be an easy solution to my problems.
> 

The MC/S code has been upstream since the v4.0.y kernels, so provided
RHEL 7.2 ships with those bits, the MC/S part should just work when you
configure MaxConnections > 1.
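For reference, on the target side MC/S is governed by the negotiated
MaxConnections parameter; with targetcli that could look like the sketch
below (the IQN and connection count are placeholders):

```shell
# Hypothetical example: raise MaxConnections on an LIO TPG so the
# initiator can negotiate MC/S. Substitute your own target IQN.
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.example:target1/tpg1 \
    set parameter MaxConnections=4
```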

> I will try to run a couple of tests today using CentOS 7.1 as server
> and client, on completely different network and hardware, to
> eliminate the possibility that the local setup is the cause of this
> issue.

However, MC/S is not going to have any effect on iodepth=1 workloads.

--nab

--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
