Re: Enabling Jumbo Frames on ceph cluster

I'm no expert, but another useful test is iperf; watch your CPU utilization while it runs.
 
You can run iperf between a couple of monitors and OSD servers.
First test at 1500, or whatever your switch's stock MTU is,
then put the servers at 9000 and the switch at 9128 (to allow for packet overhead/management), for example:
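On the Linux side that is just something along these lines (the interface name here is only an example):

  ip link set dev eth0 mtu 9000
  ip link show eth0        # confirm the new MTU took effect

The switch-side command depends on the vendor, so check its docs for the jumbo frame / system MTU setting.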
 
Then run iperf between the servers at both MTU settings,
and run it again with more parallel streams so as to saturate the network, e.g.:
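Something like this, with placeholder hostnames (-P sets the number of parallel streams):

  # on one OSD server
  iperf -s

  # on a monitor host: single stream, then 10 parallel streams
  iperf -c osd1 -t 30
  iperf -c osd1 -t 30 -P 10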
 
iperf will report your network throughput, which is usually around 90% of the link's rated speed.
Jumbo frames also reduce CPU/network cycles, since more data is pushed out per Ethernet frame.
 
For our jumbo frame configuration we saw 26 Gb/s with 1 stream and 37 Gb/s with 10 streams.
 
We didn't record the numbers for our stock MTU settings.
 
Thanks, Joe


>>> Sameer Tiwari <stiwari@xxxxxxxxxxxxxx> 8/11/2017 11:21 AM >>>
Hi,

We ran a test with 1500 MTU and 9000 MTU on a small Ceph test cluster (3 mons + 10 hosts with 2 SSDs each, one for journal and one for data) and found only a minimal ~10% perf improvement.

We tested with FIO for 4K, 8K and 64K block sizes, using RBD directly.
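For reference, the runs were roughly of this shape (pool, image and client names are placeholders), e.g. for the 4K case:

  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --runtime=60 --time_based --name=rbd-4k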

Anyone else have any experience with this?

Thanks,
Sameer

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
