Re: Ceph Nautilus not working after setting MTU 9000

Just save yourself the trouble. You won't see any real benefit from MTU
9000. It brings some small gains, but they are not worth the effort, the
problems, and the loss of reliability in most environments.
Try it yourself and do some benchmarks, especially with your regular
workload on the cluster (not the maximum peak performance), then drop the
MTU back to the default ;).

If anyone has other real-world benchmarks showing huge differences in
regular Ceph clusters, please feel free to post them here.
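For anyone who wants to run such a comparison, here is a rough sketch of one way to do it. The peer address 10.0.0.2 and the pool name testpool are placeholders, and the choice of iperf3 plus rados bench is just one reasonable approach, not a prescribed method:

```shell
# Sketch of an MTU comparison run; repeat at MTU 1500 and MTU 9000.
# 10.0.0.2 and "testpool" are placeholders for your environment.

# 1. Raw TCP throughput between two nodes
#    (start "iperf3 -s" on 10.0.0.2 first):
iperf3 -c 10.0.0.2 -t 30

# 2. Cluster-level benchmarks, closer to a real workload:
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
```

Comparing the rados bench numbers between the two MTU settings, under a load resembling your real workload, is what matters here, not the raw iperf3 peak.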

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Sun, 24 May 2020 at 15:54, Suresh Rama <sstkadu@xxxxxxxxx> wrote:

> As I said, a ping with a 9000-byte payload won't get a response; the
> payload size should be 8972. Glad it is working, but you should understand
> what happened so you can avoid this issue later.
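The 8972-byte figure follows from the header overhead: a 9000-byte MTU minus the 20-byte IPv4 header and the 8-byte ICMP header leaves 8972 bytes of payload. A sketch of the check (192.168.1.20 is a placeholder peer address):

```shell
# Test end-to-end jumbo-frame support with Don't-Fragment set (Linux iputils):
# 9000 (MTU) - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes of payload.
ping -c 3 -M do -s 8972 192.168.1.20

# With -s 8973 or more, the ping should fail ("Message too long") even on a
# correctly configured 9000-MTU path, because the packet would exceed the MTU.
```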
>
> On Sun, May 24, 2020, 3:04 AM Amudhan P <amudhan83@xxxxxxxxx> wrote:
>
> > No, ping with MTU size 9000 didn't work.
> >
> > On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar <doustar@xxxxxxxxxxxx>
> > wrote:
> >
> > > Does your ping work or not?
> > >
> > >
> > > On Sun, May 24, 2020 at 6:53 AM Amudhan P <amudhan83@xxxxxxxxx> wrote:
> > >
> > >> Yes, I have applied the setting on the switch side also.
> > >>
> > >> On Sat 23 May, 2020, 6:47 PM Khodayar Doustar, <doustar@xxxxxxxxxxxx>
> > >> wrote:
> > >>
> > >>> The problem should be with the network. When you change the MTU, it
> > >>> should be changed all over the network; every single hop on your
> > >>> network should speak and accept MTU 9000 packets. You can check it on
> > >>> your hosts with the "ifconfig" command, and there are equivalent
> > >>> commands for other network/security devices.
> > >>>
> > >>> If even a single node is not correctly configured for MTU 9000, it
> > >>> won't work.
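A quick sketch of that per-host check with the newer iproute2 tooling (the interface name eth0 and the netplan file name are placeholders for your setup):

```shell
# Show the configured MTU of the cluster-network interface:
ip link show eth0 | grep -o 'mtu [0-9]*'

# Netplan fragment (Ubuntu 18.04) that would set MTU 9000 on that interface,
# e.g. in /etc/netplan/01-netcfg.yaml, then apply with "netplan apply":
#   network:
#     version: 2
#     ethernets:
#       eth0:
#         mtu: 9000
```

Running the `ip link` check on every host, and the switch-side equivalent on every device in the path, should surface the one misconfigured hop.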
> > >>>
> > >>> On Sat, May 23, 2020 at 2:30 PM sinan@xxxxxxxx <sinan@xxxxxxxx>
> wrote:
> > >>>
> > >>>> Can the servers/nodes ping each other using large packet sizes? I
> > >>>> guess not.
> > >>>>
> > >>>> Sinan Polat
> > >>>>
> > >>>> > On 23 May 2020 at 14:21, Amudhan P <amudhan83@xxxxxxxxx> wrote:
> > >>>> >
> > >>>> > The OSD logs show "heartbeat_check: no reply from OSD".
> > >>>> >
> > >>>> >> On Sat, May 23, 2020 at 5:44 PM Amudhan P <amudhan83@xxxxxxxxx>
> > >>>> wrote:
> > >>>> >>
> > >>>> >> Hi,
> > >>>> >>
> > >>>> >> I have set the network switch to an MTU of 9000, and also in my
> > >>>> >> netplan configuration.
> > >>>> >>
> > >>>> >> What else needs to be checked?
> > >>>> >>
> > >>>> >>
> > >>>> >>> On Sat, May 23, 2020 at 3:39 PM Wido den Hollander <wido@xxxxxxxx>
> > >>>> >>> wrote:
> > >>>> >>>
> > >>>> >>>
> > >>>> >>>
> > >>>> >>>> On 5/23/20 12:02 PM, Amudhan P wrote:
> > >>>> >>>> Hi,
> > >>>> >>>>
> > >>>> >>>> I am using Ceph Nautilus on Ubuntu 18.04, working fine with the
> > >>>> >>>> default MTU size of 1500; recently I tried to update the MTU
> > >>>> >>>> size to 9000.
> > >>>> >>>> After enabling jumbo frames, running "ceph -s" times out.
> > >>>> >>>
> > >>>> >>> Ceph can run just fine with an MTU of 9000, but there is probably
> > >>>> >>> something else wrong on the network which is causing this.
> > >>>> >>>
> > >>>> >>> Check the jumbo frame settings on all the switches as well, to
> > >>>> >>> make sure they forward all the packets.
> > >>>> >>>
> > >>>> >>> This is definitely not a Ceph issue.
> > >>>> >>>
> > >>>> >>> Wido
> > >>>> >>>
> > >>>> >>>>
> > >>>> >>>> regards
> > >>>> >>>> Amudhan P
> > >>>> >>>> _______________________________________________
> > >>>> >>>> ceph-users mailing list -- ceph-users@xxxxxxx
> > >>>> >>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> > >>>> >>>>


