Re: [External Email] Re: Ceph Nautilus not working after setting MTU 9000


I would not call a Ceph page a random tuning tip. At least I hope they 
are not. NVMe-only with 100 Gbit is not really a standard setup. I assume 
that with such a setup you have the luxury of not noticing many 
optimizations. 

What I mostly read is that changing to MTU 9000 allows you to better 
saturate a 10 Gbit adapter, and I expect this to show on a low-end busy 
cluster. Don't you have any test results from such a setup?




-----Original Message-----

Subject: Re: Re: [External Email] Re: Ceph Nautilus not 
working after setting MTU 9000

Don't optimize stuff without benchmarking *before and after*, and don't 
apply random tuning tips from the Internet without benchmarking them.

My experience with jumbo frames: 3% performance. On an NVMe-only setup 
with a 100 Gbit/s network.
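A before/after comparison boils down to a simple delta. The throughput numbers below are made-up placeholders; substitute your own measurements (e.g. from iperf3 or "rados bench", in MB/s):

```shell
# Hypothetical throughput measured before and after the MTU change (MB/s).
before=1000
after=1030
# Integer percentage change relative to the "before" measurement.
pct=$(( (after - before) * 100 / before ))
echo "${pct}% change"
```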

Paul


--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, May 26, 2020 at 7:02 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:




	Look what I have found!!! :)
	https://ceph.com/geen-categorie/ceph-loves-jumbo-frames/ 
	
	
	
	-----Original Message-----
	From: Anthony D'Atri [mailto:anthony.datri@xxxxxxxxx] 
	Sent: Monday, 25 May 2020 22:12
	To: Marc Roos
	Cc: kdhall; martin.verges; sstkadu; amudhan83; ceph-users; doustar
	Subject: Re: Re: [External Email] Re: Ceph Nautilus not 
	working after setting MTU 9000
	
	Quick and easy depends on your network infrastructure. Sometimes it is 
	difficult or impossible to retrofit a live cluster without disruption.
	
	
	> On May 25, 2020, at 1:03 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
	> 
	> 
	> I am interested. I am always setting the MTU to 9000. To be honest I 
	> cannot imagine there is no optimization, since you have fewer 
	> interrupt requests and you are able to send x times as much data. 
	> Every time something is written about optimizing, the first thing 
	> mentioned is changing to MTU 9000, because it is a quick and easy win.
	> 
	> 
	> 
	> 
	> -----Original Message-----
	> From: Dave Hall [mailto:kdhall@xxxxxxxxxxxxxx]
	> Sent: Monday, 25 May 2020 5:11
	> To: Martin Verges; Suresh Rama
	> Cc: Amudhan P; Khodayar Doustar; ceph-users
	> Subject: Re: [External Email] Re: Ceph Nautilus not 
	> working after setting MTU 9000
	> 
	> All,
	> 
	> Regarding Martin's observations about Jumbo Frames....
	> 
	> I have recently been gathering some notes from various internet 
	> sources regarding Linux network performance, and Linux performance in 
	> general, to be applied to a Ceph cluster I manage but also to the 
	> rest of the Linux server farm I'm responsible for.
	> 
	> In short, enabling Jumbo Frames without also tuning a number of other 
	> kernel and NIC attributes will not provide the performance increases 
	> we'd like to see.  I have not yet had a chance to go through the rest 
	> of the testing I'd like to do, but I can confirm (via iperf3) that 
	> only enabling Jumbo Frames didn't make a significant difference.
	> 
	> Some of the other attributes I'm referring to are incoming and 
	> outgoing buffer sizes at the NIC, IP, and TCP levels, interrupt 
	> coalescing, NIC offload functions that should or shouldn't be turned 
	> on, packet queuing disciplines (tc), the best choice of TCP 
	> slow-start algorithms, and other TCP features and attributes.
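As a sketch of what tuning those attributes can look like on Linux (illustrative values only, not recommendations; "eth0" is a placeholder device name, and the right values depend on your hardware and workload):

```shell
# Socket buffer ceilings at the core and TCP levels.
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
# NIC ring buffers and interrupt coalescing.
ethtool -G eth0 rx 4096 tx 4096
ethtool -C eth0 adaptive-rx on adaptive-tx on
# Offloads that may help or hurt depending on the workload.
ethtool -K eth0 tso on gso on gro on
# Queuing discipline and congestion control.
tc qdisc replace dev eth0 root fq
sysctl -w net.ipv4.tcp_congestion_control=cubic
```

These need root, and the effect of each one should be benchmarked separately.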
	> 
	> The most off-beat item I saw was something about adding iptables 
	> rules to bypass conntrack table lookups.
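The conntrack-bypass idea mentioned above would look roughly like this (a sketch, assuming the default Ceph ports: 6789 for the v1 monitor and 6800-7300 for OSDs):

```shell
# Exempt Ceph traffic from connection tracking via the raw table,
# so these flows skip conntrack lookups entirely.
iptables -t raw -A PREROUTING -p tcp --dport 6789 -j NOTRACK
iptables -t raw -A PREROUTING -p tcp --dport 6800:7300 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 6789 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 6800:7300 -j NOTRACK
```

Note that untracked flows will no longer match any stateful (conntrack-based) firewall rules.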
	> 
	> In order to do anything meaningful to assess the effect of all of 
	> these settings I'd like to figure out how to set them all via 
	> Ansible - so more to learn before I can give opinions.
	> 
	> -->  If anybody has added this type of configuration to Ceph 
Ansible,
	> I'd be glad for some pointers.
	> 
	> I have started to compile a document containing my notes.  It's 
	> rough, but I'd be glad to share if anybody is interested.
	> 
	> -Dave
	> 
	> Dave Hall
	> Binghamton University
	> 
	>> On 5/24/2020 12:29 PM, Martin Verges wrote:
	>> 
	>> Just save yourself the trouble. You won't have any real benefit from 
	>> MTU 9000. It has some small benefits, but it is not worth the 
	>> effort, the problems, and the loss of reliability for most 
	>> environments.
	>> Try it yourself and do some benchmarks, especially with your regular 
	>> workload on the cluster (not the maximum peak performance), then 
	>> drop the MTU back to the default ;).
	>> 
	>> If anyone has other real-world benchmarks showing huge differences 
	>> in regular Ceph clusters, please feel free to post them here.
	>> 
	>> --
	>> Martin Verges
	>> Managing director
	>> 
	>> Mobile: +49 174 9335695
	>> E-Mail: martin.verges@xxxxxxxx
	>> Chat: https://t.me/MartinVerges
	>> 
	>> croit GmbH, Freseniusstr. 31h, 81247 Munich
	>> CEO: Martin Verges - VAT-ID: DE310638492 
	>> Com. register: Amtsgericht Munich HRB 231263
	>> 
	>> Web: https://croit.io
	>> YouTube: https://goo.gl/PGE1Bx
	>> 
	>> 
	>>> On Sun, 24 May 2020 at 15:54, Suresh Rama <sstkadu@xxxxxxxxx> wrote:
	>> 
	>>> A ping with 9000 MTU won't get a response, as I said; it should be 
	>>> 8972. Glad it is working, but you should know what happened so you 
	>>> can avoid this issue later.
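The 8972 comes from subtracting the IPv4 and ICMP headers from the 9000-byte MTU, since ping -s sets only the ICMP payload size; a quick sketch (the peer hostname in the comment is a placeholder):

```shell
# MTU counts the whole IP packet; ping -s sets only the ICMP payload,
# so subtract the IPv4 header (20 bytes) and ICMP header (8 bytes).
mtu=9000
ip_header=20
icmp_header=8
payload=$((mtu - ip_header - icmp_header))
echo "$payload"   # prints 8972
# Then test the path without fragmentation:
#   ping -M do -s "$payload" ceph-node2
```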
	>>> 
	>>>> On Sun, May 24, 2020, 3:04 AM Amudhan P <amudhan83@xxxxxxxxx> wrote:
	>>> 
	>>>> No, ping with MTU size 9000 didn't work.
	>>>> 
	>>>> On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar <doustar@xxxxxxxxxxxx> wrote:
	>>>> 
	>>>>> Does your ping work or not?
	>>>>> 
	>>>>> 
	>>>>> On Sun, May 24, 2020 at 6:53 AM Amudhan P <amudhan83@xxxxxxxxx> wrote:
	>>>>> 
	>>>>>> Yes, I have applied the setting on the switch side also.
	>>>>>> 
	>>>>>> On Sat, 23 May 2020, 6:47 PM Khodayar Doustar <doustar@xxxxxxxxxxxx> wrote:
	>>>>>> 
	>>>>>>> The problem should be with the network. When you change the MTU 
	>>>>>>> it should be changed all over the network; every single hop on 
	>>>>>>> your network should speak and accept 9000 MTU packets. You can 
	>>>>>>> check it on your hosts with the "ifconfig" command, and there 
	>>>>>>> are also equivalent commands for other network/security devices.
	>>>>>>> 
	>>>>>>> If you have just one node that is not correctly configured for 
	>>>>>>> MTU 9000, it won't work.
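A quick way to confirm that every node agrees on the MTU (the hostnames and the interface name "eth0" are placeholders):

```shell
# Print the configured MTU on each cluster node.
for h in ceph-node1 ceph-node2 ceph-node3; do
    printf '%s: ' "$h"
    ssh "$h" "ip link show dev eth0" | grep -o 'mtu [0-9]*'
done
```

Switch ports along the path also need checking, since they don't show up in the hosts' output.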
	>>>>>>> 
	>>>>>>> On Sat, May 23, 2020 at 2:30 PM sinan@xxxxxxxx <sinan@xxxxxxxx> wrote:
	>>>>>>>> Can the servers/nodes ping each other using large packet 
	>>>>>>>> sizes? I guess not.
	>>>>>>>> 
	>>>>>>>> Sinan Polat
	>>>>>>>> 
	>>>>>>>>> On 23 May 2020 at 14:21, Amudhan P <amudhan83@xxxxxxxxx> wrote:
	>>>>>>>>> In OSD logs "heartbeat_check: no reply from OSD"
	>>>>>>>>> 
	>>>>>>>>>> On Sat, May 23, 2020 at 5:44 PM Amudhan P <amudhan83@xxxxxxxxx> wrote:
	>>>>>>>>>> Hi,
	>>>>>>>>>> 
	>>>>>>>>>> I have set the network switch MTU to 9000 and also set it 
	>>>>>>>>>> in my netplan configuration.
	>>>>>>>>>> 
	>>>>>>>>>> What else needs to be checked?
	>>>>>>>>>> 
	>>>>>>>>>> 
	>>>>>>>>>>> On Sat, May 23, 2020 at 3:39 PM Wido den Hollander <wido@xxxxxxxx> wrote:
	>>>>>>>>>>> 
	>>>>>>>>>>> 
	>>>>>>>>>>>> On 5/23/20 12:02 PM, Amudhan P wrote:
	>>>>>>>>>>>> Hi,
	>>>>>>>>>>>> 
	>>>>>>>>>>>> I am using Ceph Nautilus on Ubuntu 18.04, working fine 
	>>>>>>>>>>>> with MTU size 1500 (default). Recently I tried to update 
	>>>>>>>>>>>> the MTU size to 9000. After setting jumbo frames, running 
	>>>>>>>>>>>> ceph -s times out.
	>>>>>>>>>>> Ceph can run just fine with an MTU of 9000. But there is 
	>>>>>>>>>>> probably something else wrong on the network which is 
	>>>>>>>>>>> causing this.
	>>>>>>>>>>> 
	>>>>>>>>>>> Check the jumbo frame settings on all the switches as well 
	>>>>>>>>>>> to make sure they forward all the packets.
	>>>>>>>>>>> 
	>>>>>>>>>>> This is definitely not a Ceph issue.
	>>>>>>>>>>> 
	>>>>>>>>>>> Wido
	>>>>>>>>>>> 
	>>>>>>>>>>>> regards
	>>>>>>>>>>>> Amudhan P
	>>>>>>>>>>>> _______________________________________________
	>>>>>>>>>>>> ceph-users mailing list -- ceph-users@xxxxxxx To 
	>>>>>>>>>>>> unsubscribe send an email to ceph-users-leave@xxxxxxx
	>>>>>>>>>>>> 
	
	




