Re: RAID performance - 5x SSD RAID5 - effects of stripe cache sizing

On 3/7/2013 11:57 PM, Mikael Abrahamsson wrote:
> On Thu, 7 Mar 2013, Stan Hoeppner wrote:
> 
>> I think you missed the point.  SMB with TS<->DC is ~10MB/s, but should
>> be more like 100MB/s.  Run the FTP client on TS against the FTP
>> service on the DC.  Get and put files from/to the 300GB NTFS volume
>> that is shared.  If FTP is significantly faster then we know SMB is
>> the problem, or something related to SMB, not TCP.
> 
> Don't know if it's obvious to everybody, so if you already know the
> internals of SMB, you can stop reading:
> 
> Older versions of SMB use a 60 kilobyte block for transferring files.
> This works by requesting a block, waiting for that block to be
> delivered, then requesting the next one. Those who remember Xmodem will
> know what I'm talking about.

The default MaxTransmitBufferSize is actually quite low, 16644 bytes if
system RAM is >512MB, and 4356 bytes if RAM <512MB.  You can get it up
to 60KB read and 64KB write by modifying some other reg values.  This
applies all the way up to Server 2008.  But transmit buffer size isn't
the problem in this case.
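
To put a number behind that, here's a crude stop-and-wait model in
Python.  The ~118 MB/s GbE payload rate is my assumption; the block
sizes are the defaults above and the RTTs are the LAN/WAN figures
discussed here and below.  Real SMB1 adds per-block protocol overhead
on top, so treat the output as an upper bound:

# One block per round trip: serialize the block onto the wire, then
# wait one RTT before the next request goes out.
GBE_BYTES_PER_SEC = 118e6              # assumed GbE payload rate

def stop_and_wait_mbs(block_bytes, rtt_sec, wire_rate=GBE_BYTES_PER_SEC):
    cycle = rtt_sec + block_bytes / wire_rate
    return block_bytes / cycle / 1e6   # MB/s

for block in (16644, 61440):           # default vs. tuned-up transmit buffer
    for rtt in (0.00025, 0.030):       # ~GbE LAN vs. a 30 ms WAN hop
        print("block %6d B, RTT %6.2f ms: %6.1f MB/s"
              % (block, rtt * 1000, stop_and_wait_mbs(block, rtt)))

Even the stingy 16,644-byte default models out to roughly 40MB/s at a
250 µs LAN round trip, so block size by itself doesn't get Adam down
to ~10MB/s.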

> So if there is latency introduced somewhere, SMB performance
> deteriorates severely, to the point where if there is a 30 ms delay, one
> can't really get more than 1 megabyte/s transfer speed, even if there is
> a 10GE pipe between the involved computers.

It's very unlikely that he's hitting latency over the wire.  GbE latency
is ~250 µs (0.25 ms).  We know from others' published experience that the
8111 series Realteks can be good for up to 90MB/s with fast CPUs, but
that others have trouble getting 25MB/s from them.  They could be part
of the problem here due to drivers, virtualization, etc.  I'm sure we'll
be looking at this.  Recall I recommended some time ago that Adam should
perform end-to-end netcat testing on all of his hosts' NIC ports to get
a baseline of TCP performance.
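
If netcat isn't convenient on the Windows guests, a trivial Python
probe does the same job -- one raw TCP stream, no filesystem or SMB in
the path.  Port, chunk size and the 1 GiB transfer size below are
arbitrary choices, not anything Adam has to match:

#!/usr/bin/env python3
# Raw-TCP throughput probe in the spirit of the netcat test:
# run "tcp_probe.py serve" on one host and "tcp_probe.py send <host>"
# on the other, then compare the reported rate with what SMB delivers.
import socket, sys, time

PORT  = 5001
CHUNK = 64 * 1024
TOTAL = 1024 ** 3                      # 1 GiB per run

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, peer = srv.accept()
    start, received = time.time(), 0
    while True:
        data = conn.recv(CHUNK)
        if not data:                   # sender closed the connection
            break
        received += len(data)
    secs = time.time() - start
    conn.close()
    srv.close()
    print("%s: %.0f MB in %.1fs = %.1f MB/s"
          % (peer[0], received / 1e6, secs, received / secs / 1e6))

def send(host):
    payload = b"\0" * CHUNK
    s = socket.create_connection((host, PORT))
    sent = 0
    while sent < TOTAL:
        s.sendall(payload)
        sent += len(payload)
    s.close()

if __name__ == "__main__":
    serve() if sys.argv[1] == "serve" else send(sys.argv[2])

If both directions report something near 100MB/s, the NICs and TCP
stack are fine and we keep climbing the stack; if not, the Realteks,
their drivers and the virtual switch get looked at first.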

> Latencies can be introduced by trying to read from HDDs as well, so...
> this might be worthwhile to look at.

The problem could be one of any number of things, or a combination.
It's too early to tell without testing.  And the FTP test won't be the
last.  Hunting down Windows server performance problems is bad enough,
but once virtualized it gets worse.
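
When he gets to the FTP leg, something like the sketch below keeps the
timing honest and the runs comparable.  Host name, credentials and the
test file are placeholders -- substitute whatever is actually
configured on the DC:

#!/usr/bin/env python3
# Time one large FTP download from the DC and report MB/s.
import time
from ftplib import FTP

HOST, USER, PASSWD = "dc.example.local", "testuser", "secret"
TEST_FILE = "bigfile.bin"              # something multi-GB on the shared volume

received = [0]                         # byte counter updated by the callback

def count(chunk):
    received[0] += len(chunk)

ftp = FTP(HOST)
ftp.login(USER, PASSWD)
start = time.time()
ftp.retrbinary("RETR " + TEST_FILE, count, blocksize=64 * 1024)
secs = time.time() - start
ftp.quit()
print("GET %s: %.0f MB in %.1fs = %.1f MB/s"
      % (TEST_FILE, received[0] / 1e6, secs, received[0] / secs / 1e6))

Same idea with storbinary() for the put direction.  If FTP moves the
same files at several times the ~10MB/s SMB manages, then SMB, or
something it depends on, is the culprit.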

> I don't know exactly when the better versions of SMB/CIFS were
> introduced, but I believe it happened in Vista / Windows Server 2008.

Yes, SMB 2.0 was introduced with Vista and 2008.  It has higher
throughput over high latency links due to pipelining, but this doesn't
yield much on a LAN, even Fast Ethernet.  W2K/XP default SMB can hit the
25MB/s peak duplex data rate of 100FDX.
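
To show why the pipelining only pays off when the RTT is large, here's
the same toy model as above with an outstanding-request count bolted
on (again, illustrative numbers only; real SMB2 crediting is more
involved):

GBE = 118e6                            # assumed GbE payload rate, bytes/s

def pipelined_mbs(block, rtt, depth, wire_rate=GBE):
    # `depth` requests in flight per round trip, capped by the wire itself
    per_cycle = depth * block / (rtt + block / wire_rate)
    return min(wire_rate, per_cycle) / 1e6

for rtt, label in ((0.00025, "GbE LAN"), (0.030, "30 ms WAN")):
    for depth in (1, 10):
        print("%-9s, %2d in flight: %6.1f MB/s"
              % (label, depth, pipelined_mbs(61440, rtt, depth)))

Ten requests in flight turns ~2MB/s into ~20MB/s on the 30 ms link,
but on the LAN one request per round trip was already close to wire
speed, which is why SMB 2.0 doesn't buy much there.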

-- 
Stan


