Lower than expected iSCSI performance compared to CIFS

I have been looking into a performance concern with the iSCSI target
as compared to CIFS running on the same server.  The expectation was
that iSCSI should perform somewhat similarly to Samba.  The test
environment is Windows 7 and Windows 2008 initiators connecting to a
target running on Debian Wheezy (a 3.2.46 kernel).  The test is a
file copy from Windows to the Linux server.  The source volume is a
software RAID 0 running on Windows.  The destination is an iSCSI LUN
on a software RAID-5 array with four disks (2 TB WD Reds).

The iSCSI write (from Windows to the Linux server) is considerably
slower than the CIFS write, at roughly half the rate.

Specifically, I see 90+ MB/s writes with Samba on both the Windows 7
and Windows 2008 machines (using robocopy and 5.7 GB of data spread
unevenly across about 30 files).

Performing the same tests with iSCSI and what I believe to be version
2.0.8 of the Windows iSCSI initiator, I get closer to 40-45 MB/s on
Windows 7 and 65 MB/s on Windows 2008.

To test the theory that the problem was on the Windows side, I
connected the Windows 7 initiator to a commercial SAN and repeated the
same tests.  I got results of around 87 MB/s.  The commercial SAN was
configured similarly to my Linux server: RAID-5 with four 2 TB WD Red
disks and similar hardware (Intel Atom processor, e1000e NICs,
although less physical RAM: 1 GB vs. 2 GB).

The results are fairly repeatable (+/- a couple of MB/s) and, at least
with Windows 7, do not appear to suggest a specific issue with the
Windows side of the equation.  The CIFS performance would suggest (to
me, at least) that there is not a basic networking problem, either.
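
For what it's worth, the kind of local write test I have in mind to
rule out the backstore itself looks roughly like the sketch below.
The mount point and sizes are placeholders for wherever the RAID-5 md
array is mounted; it just checks whether the array can sustain the
90+ MB/s I see over CIFS, independent of the network and iSCSI layers.

#!/usr/bin/env python
# Rough local sequential-write test against the RAID-5 backstore.
# TEST_FILE is a placeholder path on a filesystem backed by the md array.
import os
import time

TEST_FILE = "/mnt/raid5/throughput_test.bin"   # placeholder path
BLOCK = 1024 * 1024                            # 1 MiB per write
COUNT = 2048                                   # 2048 x 1 MiB = 2 GiB total

buf = os.urandom(BLOCK)
start = time.time()
f = open(TEST_FILE, "wb")
for _ in range(COUNT):
    f.write(buf)
f.flush()
os.fsync(f.fileno())                           # force the data out to the array
f.close()
elapsed = time.time() - start
print("wrote %d MiB in %.1fs -> %.1f MiB/s" % (COUNT, elapsed, COUNT / elapsed))
os.remove(TEST_FILE)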

I've tried a number of different things in an attempt to affect the
iSCSI performance: changing the I/O scheduler (CFQ, deadline, and
noop), confirming write caching is on with hdparm, tweaking vm
parameters in the kernel, tweaking TCP and adapter parameters (both in
Linux and Windows), etc.  Interestingly, the aggregate numbers do not
change by more than about +/- 10%, with enough run-to-run variability
that I'd say the changes are essentially in the noise.  I will note
that I have not gone to 9000-byte MTUs, but that seems irrelevant, as
the commercial SAN I compared against wasn't using them either.
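
For reference, the knobs I mean are the ones exposed under
/sys/block/*/queue and /proc/sys/vm.  A rough dump along these lines
(the sd* glob and the particular keys are just examples of what I've
been touching) makes it easier to compare what was actually in effect
between runs:

#!/usr/bin/env python
# Dump the block-layer and vm settings currently in effect.
# The sd* glob and the key lists are examples; adjust for the md members.
import glob
import os

QUEUE_KEYS = ["scheduler", "nr_requests", "read_ahead_kb", "max_sectors_kb"]
VM_KEYS = ["dirty_ratio", "dirty_background_ratio", "dirty_expire_centisecs"]

for dev in sorted(glob.glob("/sys/block/sd*")):
    print(os.path.basename(dev))
    for key in QUEUE_KEYS:
        path = os.path.join(dev, "queue", key)
        if os.path.exists(path):
            print("  %s: %s" % (key, open(path).read().strip()))

for key in VM_KEYS:
    print("vm.%s = %s" % (key, open("/proc/sys/vm/" + key).read().strip()))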

I attempted to look at Wireshark traces to identify any obvious
patterns in the traffic.  Unfortunately, the amount of data required
before I start to see repeatable differences in the aggregate rates
(>400 MB of file transfers), combined with offloading and the
significant amount of caching in Windows, has made such an analysis a
bit tricky.
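
For anyone who wants to look at a capture the same way, the aggregate
rates I'm comparing can be pulled out of a trace with tshark's io,stat
report rather than eyeballing individual packets.  A rough sketch (the
capture file name is a placeholder, and it assumes the default iSCSI
port 3260):

#!/usr/bin/env python
# Print per-second byte counts for iSCSI traffic from a capture, using
# tshark's io,stat report.  CAPTURE is a placeholder file name.
import subprocess

CAPTURE = "iscsi_copy.pcap"   # placeholder capture file
cmd = ["tshark", "-q", "-r", CAPTURE, "-z", "io,stat,1,tcp.port==3260"]
print(subprocess.check_output(cmd).decode())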

It seems to me that something is misconfigured in a very basic way,
limiting performance far more severely than simple tuning could
explain, but I am at a loss to understand what it is.

I am hoping this rings a bell for someone who can give me some
pointers on where to focus my attention.

Thanks,

Scott Hallowell
--



