Re: Poor performance on 10 Gbps SAN


On 06 Sep 2013, at 4:10 PM, Ed Cashin <ecashin@xxxxxxxxxx> wrote:

> I don't have a lot of experience with the other non-Coraid AoE targets that are out there, but you might check whether one of them that's oriented more toward performance could be useful to you.
> 
> That said, while checking the vblade README for the design goals, I noticed that it advertises a capacity for 16 outstanding commands.  If you want to try some tuning, you could adjust Bufcount in dat.h and then make sure your settings in /proc are sufficient to allow the kernel to buffer 16 writes.  (Read commands are small.)
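For anyone following along, the /proc settings Ed refers to are the kernel's network buffer limits, which cap how much data vblade's packet socket can queue. A sketch of what that tuning might look like (the values here are hypothetical; the point is to allow at least Bufcount outstanding jumbo-frame writes):

```
# e.g. /etc/sysctl.d/90-aoe.conf -- hypothetical values
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
```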

I ran vblade with -b to increase the buffer count, and it improved performance quite a bit, but vblade now maxes out the CPU. I found that buffer counts above 64 showed little or no further improvement. There is, however, a big difference between normal IO (dd with conv=fdatasync) and direct IO (dd with {o,i}flag=direct) on the initiator:

Test           MB/s   CPU   AvgPktSz   Direct MB/s   CPU   AvgPktSz
Disk Read       538   95%       2083           623   67%       4333
Disk Write      443   97%       2095           582   75%       4345
Ramdisk Read    655   97%       2083           778   69%       4333
Ramdisk Write   424  100%       2095           624   81%       4345

AvgPktSz shows the average packet size as measured by nettop. Wireshark confirms that "normal" IO generates 4132-byte packets while direct IO results in 8740-byte packets. I know Q 5.23 of the Coraid Linux FAQ says that AoE devices with an odd number of sectors result in 512-byte IO jobs, but mine have even sector counts. This is probably not the best way to benchmark, but when I create a filesystem on top of my AoE device I get awful performance (50 MB/s), so there are clearly alignment issues.
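For reference, the numbers above came from invocations along these lines. The shelf/slot, interface, and device paths are hypothetical, and a plain file stands in for the AoE device so the commands are runnable anywhere:

```shell
# Target side: export a device with a larger buffer count (vblade -b).
# Shown as a comment because it needs a real NIC and backing device:
#   vblade -b 64 0 1 eth0 /dev/sdb

# Initiator side: on a real setup this would be the AoE device,
# e.g. /dev/etherd/e0.0 (hypothetical shelf.slot).
DEV=${DEV:-./aoe-bench.img}

# Buffered ("normal") write: conv=fdatasync flushes to the device before
# dd exits, so the cache-drain time is included in the reported rate.
dd if=/dev/zero of="$DEV" bs=1M count=16 conv=fdatasync

# Direct write/read: {o,i}flag=direct bypasses the initiator's page
# cache and issues larger, aligned requests. The fallback covers
# filesystems without O_DIRECT support.
dd if=/dev/zero of="$DEV" bs=1M count=16 oflag=direct 2>/dev/null \
  || echo "O_DIRECT not supported here"
dd if="$DEV" of=/dev/null bs=1M count=16 iflag=direct 2>/dev/null \
  || echo "O_DIRECT not supported here"
```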

Either way, looking at the CPU usage it's clear that vblade isn't going to reach 10 Gb/s.

I also tried other Linux targets:

kvblade: Doesn't compile against kernel 3.x.

ggaoed: About 25% slower than vblade:

Test           MB/s   CPU   Direct MB/s   CPU
Disk Read       446   71%           446   51%
Disk Write      355   63%           557   56%
Ramdisk Read    531   91%           627   67%
Ramdisk Write   399   85%           602   73%

qaoed: 25-50% slower than vblade:

Test           MB/s   CPU   Direct MB/s   CPU
Disk Read       282   77%           473   73%
Disk Write      259   85%           465   73%
Ramdisk Read    291   99%           521   69%
Ramdisk Write   261   75%           467   75%

Unless I'm missing any further tuning options, none of the open source Linux AoE targets seem to be suitable for a 10 Gb/s SAN.

Regards,
Derick
_______________________________________________
Aoetools-discuss mailing list
Aoetools-discuss@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/aoetools-discuss
