Re: Lower than expected iSCSI performance compared to CIFS


On Mon, 2013-08-26 at 22:14 -0600, Scott Hallowell wrote:
> Nicholas,
> 
> >
> > To confirm, when you enable buffered FILEIO, you're able to reach
> > comparable results with Samba, right..?
> >
> 
> I got much closer to my Samba results, yes.
> 
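(As an aside: depending on the kernel version, buffered FILEIO is
enabled via the fd_buffered_io control option when the fileio backstore
is created.  A rough, untested sketch, with made-up names and sizes:

   mkdir -p /sys/kernel/config/target/core/fileio_0/example_dev
   echo "fd_dev_name=/dev/md0,fd_dev_size=107374182400,fd_buffered_io=1" \
       > /sys/kernel/config/target/core/fileio_0/example_dev/control
   echo 1 > /sys/kernel/config/target/core/fileio_0/example_dev/enable

Adjust to your configfs layout.)
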
> > If you're able to switch backends and reach 1 Gb/sec performance, that
> > would tend to indicate that it's something specific to the backend, and
> > not an iSCSI fabric-specific issue.
> >
> >> The NAS I am comparing against, which is performing surprisingly well,
> >> is also set up to use iblock.
> >
> > Please share what NAS and the version of LIO that it's using for
> > comparison.  (eg: cat /sys/kernel/config/target/version)
> >
> 
> The commercial NAS I have to compare against is a Synology DS1511+.
> It has a hardware configuration that is quite close to the system I
> am working on.  The version string:
> 
> Target Engine Core ConfigFS Infrastructure v3.4.0 on Linux/x86_64 on 3.2.30
> 

It's my understanding that Synology is caching all of their writes to
the backend, which would explain the performance gap you're seeing
between the different MD raid backends.
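
If you want to double check what your own LIO backend is advertising
for write cache, the emulate_write_cache attribute is exposed in
configfs.  For example (the backstore path here is just illustrative):

   cat /sys/kernel/config/target/core/iblock_0/my_dev/attrib/emulate_write_cache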

> The version I am running:
> 
> Target Engine Core ConfigFS Infrastructure v4.1.0-rc-m1 on
> Linux/x86_64 on 3.2.0-4-amd64
> 
> > It depends on a number of things.  One is the physical queue depth for
> > each of the drives in the software raid.  Typical low-end HBAs only
> > support queue_depth=1, which certainly has an effect on performance.
> > This value is located at /sys/class/scsi_device/$HCTL/device/queue_depth
> >
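A quick way to dump the current depth for every SCSI device on the
system (read-only, so safe to run):

   for f in /sys/class/scsi_device/*/device/queue_depth; do
       echo "$f: $(cat $f)"
   done
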
> > Another factor can be if the individual drives in the raid have the
> > underlying WriteCacheEnable bit set.  This can be checked with 'sdparm
> > --get=WCE /dev/sdX', and set with 'sdparm --set=WCE /dev/sdX'.
> >
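To check every member of an md array in one pass, something like the
following should work, assuming whole-disk members and /dev/md0 as the
array (if the slaves are partitions, point sdparm at the underlying
disks instead):

   for s in /sys/block/md0/slaves/*; do
       sdparm --get=WCE "/dev/$(basename "$s")"
   done
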
> > Also, you'll want to understand the implication of enabling this: in
> > the case of a power failure there is no assurance that the data in the
> > individual drives' caches has been written out to disk.
> >
> 
> I confirmed WCE is set on all disks in the raid array.  I also turned
> on NCQ, setting queue_depth to 31.  It made a small improvement (about
> 5%).
> 
> > Since you've already eliminated a different default_cmdsn_depth value,
> > it's likely not going to be an iscsi-target issue.  It's most likely an
> > issue of one of the software RAID configurations being faster for
> > non-buffered IO.
> >
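(For reference, default_cmdsn_depth is a per-TPG attribute in configfs,
e.g. with a made-up IQN:

   cat /sys/kernel/config/target/iscsi/iqn.2013-08.org.example:t1/tpgt_1/attrib/default_cmdsn_depth
)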
> 
> The best test I have to look at this would be to do a dd directly to
> the volume, to the raid array, and to the individual disks, and
> compare those numbers against the copy from the Windows system.  I'll
> give that a shot tomorrow.
> 

dd is a pretty crummy test.  I'd recommend using fio instead.
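
As a starting point, something like the following, where the device
name is only an example.  Note that it writes directly to the device,
so don't run it against an array holding data you care about:

   fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M \
       --size=4g --ioengine=libaio --iodepth=32 --direct=1

Run it against the individual disks, the md device, and over the iscsi
LUN from the initiator side, and you can compare sequential write
throughput at each layer without the page cache getting in the way.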

--nab
