Re: RAID performance - 5x SSD RAID5 - effects of stripe cache sizing


On Fri, 8 Mar 2013, Stan Hoeppner wrote:

> The default MaxTransmitBufferSize is actually quite low: 16644 bytes if system RAM is >512MB, and 4356 bytes if RAM is <512MB. You can get it up to 60KB read and 64KB write by modifying some other registry values. This applies all the way up to Server 2008. But transmit buffer size isn't the problem in this case.

Indeed, found this:

http://blogs.msdn.com/b/openspecification/archive/2009/04/10/smb-maximum-transmit-buffer-size-and-performance-tuning.aspx
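
For reference, the sizing rule Stan describes boils down to something like this (a sketch only; the helper name is mine, the values are the ones quoted above):

    def default_max_transmit_buffer(ram_bytes):
        # Default SMB1 MaxTransmitBufferSize per the figures quoted above:
        # 16644 bytes when system RAM exceeds 512 MB, 4356 bytes otherwise.
        return 16644 if ram_bytes > 512 * 1024 ** 2 else 4356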

It's not clear to me how the transmit buffers interact with reading from the drive. If a 60 kilobyte read request comes in, 60 kilobytes (or thereabouts) are read and sent out; then we wait, a new 60 kilobyte request comes in, it's read from the drive, sent, and we wait again. Unless automatic read-ahead is done and the blocks it reads are cached, I can see this getting very inefficient very quickly.
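
As a back-of-envelope model (a rough sketch; every name and number below is my own illustrative assumption, not a measured value): with no read-ahead, each buffer pays one round trip plus drive access plus time on the wire, so per-request drive latency eats directly into throughput:

    # Toy model of synchronous SMB-style reads: one request outstanding
    # at a time, each paying RTT + drive access + serialization time.
    def sync_read_throughput(buf_bytes, rtt_s, line_rate_bps, drive_s=0.0):
        wire_s = buf_bytes * 8 / line_rate_bps         # time on the wire
        return buf_bytes / (rtt_s + drive_s + wire_s)  # bytes per second

    BUF = 60 * 1024   # ~60 KB read buffer
    LINE = 100e6      # 100 Mbit/s (100FDX)

    # Blocks cached / read ahead: ~12.0 MB/s, close to wire speed.
    print(sync_read_throughput(BUF, rtt_s=0.2e-3, line_rate_bps=LINE))

    # No read-ahead, say 8 ms of drive access per request: ~4.7 MB/s.
    print(sync_read_throughput(BUF, rtt_s=0.2e-3, line_rate_bps=LINE,
                               drive_s=8e-3))

A few milliseconds of drive access per request already more than halves throughput, which is the "very inefficient very quickly" case.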

> Yes, SMB 2.0 was introduced with Vista and Server 2008. It has higher throughput over high-latency links due to pipelining, but this doesn't yield much on a LAN, even Fast Ethernet. W2K/XP default SMB can hit the 25MB/s peak duplex data rate of 100FDX.

Yes, if latency is low, this isn't a problem.
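
To put rough numbers on that (again a toy calculation, with RTTs I've assumed for illustration): with a single request in flight, throughput is bounded by buffer size over round-trip time; on a LAN that bound sits far above the wire rate, and SMB 2.0's pipelining multiplies it by the number of outstanding requests:

    def throughput_bound(buf_bytes, rtt_s, in_flight=1, wire_Bps=12.5e6):
        # Upper bound with in_flight outstanding requests, capped by the wire.
        return min(in_flight * buf_bytes / rtt_s, wire_Bps)

    BUF = 60 * 1024
    print(throughput_bound(BUF, rtt_s=0.2e-3))               # LAN: wire-limited, 12.5 MB/s
    print(throughput_bound(BUF, rtt_s=50e-3))                # 50 ms link: ~1.2 MB/s
    print(throughput_bound(BUF, rtt_s=50e-3, in_flight=10))  # pipelined: ~12.3 MB/s

So on Fast Ethernet even a single 60 KB request in flight is already wire-limited, which matches Stan's point that pipelining doesn't yield much on a LAN.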

--
Mikael Abrahamsson    email: swmike@xxxxxxxxx

