Usually this kind of problem can originate in many places.
When you set the MTU to 9000, did you test with ping and the "Do not fragment" flag?
If there is a device on the path that is not configured for (or doesn't support) MTU 9000, it will fragment all packets, which can lead to excessive device CPU consumption. I have seen many firewalls that do not use jumbo frames by default.
ping <IP/HOSTNAME from brick definition> -M do -s 8972
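The payload size of 8972 comes from subtracting the 20-byte IP header and the 8-byte ICMP header from the 9000-byte MTU. A small sketch that derives the payload and prints the ping test command for each storage node; the hostnames storage1..storage3 are placeholders for your actual brick hosts, and the commands are echoed rather than executed so they can be reviewed first:

```shell
MTU=9000
PAYLOAD=$((MTU - 28))   # 28 = 20-byte IP header + 8-byte ICMP header
echo "ICMP payload for MTU ${MTU}: ${PAYLOAD} bytes"
for host in storage1 storage2 storage3; do   # placeholder brick hostnames
  echo "ping -c 3 -M do -s ${PAYLOAD} ${host}"
done
```

If any of the printed ping commands fails with "Message too long" or gets no replies, some hop on that path is not passing 9000-byte frames unfragmented.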
Best Regards,
Strahil Nikolov
On Friday, 16 September 2022 at 22:24:14 GMT+3, Gionatan Danti <g.danti@xxxxxxxxxx> wrote:
On 2022-09-16 18:41, dpgluster@xxxxxxxxx wrote:
> I have made extensive load tests in the last few days and figured out
> it's definitely a network related issue. I changed from jumbo frames
> (mtu 9000) to default mtu of 1500. With a mtu of 1500 the problem
> doesn't occur. I'm able to bump the io-wait of our gluster storage
> servers to the max possible values of the disks without any error or
> connection loss between the hypervisors or the storage nodes.
>
> As mentioned in multiple gluster best practices it's recommended to
> use jumbo frames in gluster setups for better performance. So I would
> like to use jumbo frames in my datacenter.
>
> What could be the issue here?
I would try with a jumbo frame setting of 4074 (or 4088) bytes.
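For a 4074-byte MTU, the matching unfragmented ICMP payload would be 4046 bytes (MTU minus the 28 bytes of IP + ICMP headers). A minimal sketch for trying this, assuming the storage NIC is named eth0 (substitute your actual interface); the snippet only prints the commands so they can be reviewed before running them with root privileges, and note that an `ip link set` change is not persistent across reboots:

```shell
IFACE=eth0                    # placeholder: replace with your storage-network interface
NEW_MTU=4074
PAYLOAD=$((NEW_MTU - 28))     # 4046 bytes: subtract IP (20) + ICMP (8) header overhead
echo "ip link set dev ${IFACE} mtu ${NEW_MTU}"
echo "ping -c 3 -M do -s ${PAYLOAD} <brick-host>"
```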
Regards.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
________

Community Meeting Calendar:
Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users