XioMessenger (RDMA) Performance results

I'm happy to share test results we ran in the lab with Matt's latest XioMessenger code, which implements Ceph messaging over the Accelio RDMA library.
The results look quite encouraging, demonstrating roughly a *20x* performance boost.
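
For anyone who wants to try this once the code lands, the messenger implementation should be selectable through ceph.conf. A minimal sketch, assuming the branch exposes the messenger type via ms_type (the option name is illustrative and may differ in the actual code):

    [global]
    # select the Accelio/RDMA messenger instead of SimpleMessenger (TCP)
    # option name assumed; check the xio branch for the exact setting
    ms_type = xio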

Below is a table comparing XioMessenger (RDMA) with SimpleMessenger (TCP) across various interconnects (56Gb InfiniBand and 40GbE/RoCE).
Note that we tested with CRC both on and off; with RDMA there is no need for software CRC, since it is done by the hardware.
The tests below use a single communication thread; using more threads would produce higher performance (at 64KB IO the link/PCIe is already saturated with RDMA using a single thread).
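
For reference, the crc / no_crc rows correspond to toggling the messenger's software CRC. A sketch of the kind of ceph.conf knobs involved, assuming the standard messenger CRC options are how the no_crc runs would be configured (the option names are illustrative, not confirmed here):

    [global]
    # no_crc runs: disable software CRC on the wire protocol;
    # with RDMA the transport hardware already protects data integrity
    ms_crc_data = false
    ms_crc_header = false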

Matt has some more small-IO optimizations in the pipeline, and we hope to share performance results using librados soon; I assume those will be disk bound.

protocol  CRC     msg depth  IO size   Msg/sec  bandwidth (MB/s)  CPU% server  CPU% client
eth       crc     50         4K         16,262        64          100%         100%
eth       no_crc  50         4K         15,637        61          100%         100%
eth       crc     50         64K         5,960       373           93%         100%
eth       no_crc  50         64K         7,678       480           93%         100%
ipoib     no_crc  50         4K         16,003        63          100%         100%
ipoib     no_crc  50         64K         7,375       461           93%         100%

IB        no_crc  50         4K        334,088      1305           98%          98%
IB        no_crc  50         64K        95,078      5942           98%          98%

roce      no_crc  50         4K        332,388      1298           95%         100%
roce      no_crc  50         64K        69,445      4340           91%          87%

roce      crc     50         4K        172,756       675           97%         100%
roce      crc     50         64K        19,657      1229          100%          48%
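
As a quick sanity check on the table: the bandwidth column is just the message rate times the IO size (with MB here meaning 2^20 bytes), and the 20x figure is the 4K message-rate ratio of IB over plain eth with CRC. A small Python sketch over the numbers above:

    # bandwidth (MB/s) ~= msg/sec * IO size (bytes) / 2**20
    def bandwidth_mb(msg_per_sec, io_size_bytes):
        return msg_per_sec * io_size_bytes / 2.0**20

    print(bandwidth_mb(334088, 4 * 1024))   # ~1305 MB/s  (IB, 4K row)
    print(bandwidth_mb(95078, 64 * 1024))   # ~5942 MB/s  (IB, 64K row)
    print(334088 / 16262.0)                 # ~20.5       (the "20x" 4K gain vs eth/crc)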

Regards, Yaron






