stress testing a 40Gbps Linux bridge at Mpps rates - is HFSC a bottleneck?

Linux Advanced Routing and Traffic Control

Hi,

I have three multi-CPU servers connected in a row: serverA --- serverB --- serverC.
They are interconnected with 40Gbps Mellanox X5 PCIe v4 cards over optical MPO cables.

All servers are multi-core beasts with two CPUs each; all of them have Hyper-Threading off.

- serverA (16 Xeon cores) sends iperf3 traffic (small packets, forced by lowering the MTU to 90 B on the 40Gbps port) to serverC (32 Xeon cores)
- serverB (128 EPYC cores) is set up as a plain Linux bridge (sketched below)
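
A minimal sketch of the bridge setup on serverB, assuming the two 40Gbps ports are eth0 and eth1 (interface names are placeholders):

# create the bridge and enslave both 40Gbps ports
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev eth0 up
ip link set dev eth1 up
ip link set dev br0 up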

serverA is able to push around 20 million packets per second (Mpps) of iperf3 traffic to serverC through serverB (no iptables/ebtables rules).
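
A minimal sketch of the traffic generation on serverA, assuming serverC is reachable as 10.0.0.3 and eth0 is the 40Gbps port (address, interface name and stream count are placeholders):

# lower the port MTU so iperf3 emits small (~90 B) frames
ip link set dev eth0 mtu 90
# run several parallel streams toward serverC
iperf3 -c 10.0.0.3 -P 16 -t 60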


The problem appears when I add an HFSC config, even one simplified as much as possible, to the 40Gbps port on the tested serverB: the packet rate drops to about 2.6 Mpps, while the CPUs are nearly idle.


This is the HFSC config. It is just a minimal config - I'm probing for HFSC/Linux bridge/... limits and I have narrowed the problem down to this:


# root qdisc and class:
tc qdisc add dev eth0 root handle 1: hfsc default ffff
tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 34gbit ul m2 34gbit
# default class and its qdisc:
tc class add dev eth0 parent 1:1 classid 1:ffff hfsc ls m2 34gbit ul m2 34gbit
tc qdisc add dev eth0 parent 1:ffff handle ffff:0 sfq perturb 5

--> all iperf3 traffic passes through the default class ffff, which is expected in this test setup (so no filters/iptables classify rules/ipsets are necessary)
--> this alone is enough to drop the rate to just 2.6 Mpps instead of 20 Mpps...
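
For what it's worth, the standard tc counters can confirm that the traffic really lands in class 1:ffff and show whether the qdisc is dropping packets or building a backlog:

# per-qdisc and per-class statistics (packets, drops, backlog)
tc -s qdisc show dev eth0
tc -s class show dev eth0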


What am I missing?

Thank you.
Pep.





