On Sun, 18 Aug 2019 00:34:33 +0530
Akshat Kakkar <akshat.1984@xxxxxxxxx> wrote:

> My goal is not just to make as many classes as possible, but also to
> use them to do rate limiting per IP per server. Say I have a list of
> 10000 IPs and more than 100 servers. Simply put, I want a few IPs to
> get a speed of, say, 1 Mbps per server, but others a speed of 2 Mbps
> per server. How can I achieve this without having 10000 x 100
> classes? These numbers can be larger than this, and hence I am
> looking for a generic solution.

As Eric Dumazet also points out indirectly, you will be creating a
huge bottleneck for SMP/multi-core CPUs, as your HTB root qdisc is a
serialization point for all egress traffic that all CPUs will need to
take a lock on.

It sounds like your use-case is not global rate limiting; instead the
goal is to rate limit customers or services (to something
significantly lower than the NIC link speed). To get scalability in
this case, you can instead use the MQ qdisc (as Eric also points
out). I have an example script here[1] that shows how to set up MQ as
the root qdisc and add HTB leafs based on how many TX-queues the
interface has, via /sys/class/net/$DEV/queues/tx-*/

[1] https://github.com/xdp-project/xdp-cpumap-tc/blob/master/bin/tc_mq_htb_setup_example.sh

You are not done yet. To solve the TX-queue locking congestion, the
traffic needs to be redirected to the appropriate/correct TX CPUs.
This can be done either with an RSS (Receive Side Scaling) HW ethtool
adjustment (reduce the hash to L3 IPs only), with RPS (Receive Packet
Steering), or with XDP cpumap redirect. The XDP cpumap redirect
feature is implemented with XDP+TC BPF code here[2]. Notice that XPS
can interfere with this, so there is an XPS disable script here[3].

[2] https://github.com/xdp-project/xdp-cpumap-tc
[3] https://github.com/xdp-project/xdp-cpumap-tc/blob/master/bin/xps_setup.sh

--
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
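
To illustrate the MQ-root + HTB-per-TX-queue setup described above, here
is a minimal sketch of the tc commands. This is not the referenced
tc_mq_htb_setup_example.sh itself; the device name, handles, rates and
class layout below are assumptions for illustration only:

  #!/bin/bash
  # Sketch: MQ as root qdisc, one independent HTB instance per HW TX-queue.
  # DEV, CEIL and the per-customer rates are assumed values.
  DEV=eth0
  CEIL=10gbit           # per-TX-queue HTB ceiling (assumed link speed)

  # MQ as root qdisc; it exposes one class per hardware TX-queue
  tc qdisc replace dev $DEV root handle 7FFF: mq

  i=0
  for txq in /sys/class/net/$DEV/queues/tx-*; do
      i=$((i + 1))
      hex=$(printf '%x' $i)        # tc handles/classids are hexadecimal
      # Attach an independent HTB instance under the MQ class for this queue
      tc qdisc add dev $DEV parent 7FFF:$hex handle $hex: htb default 2
      tc class add dev $DEV parent $hex: classid $hex:1 htb rate $CEIL
      # Example per-customer rate tiers; extend per tier as needed
      tc class add dev $DEV parent $hex:1 classid $hex:2 htb rate 2mbit ceil 2mbit
      tc class add dev $DEV parent $hex:1 classid $hex:3 htb rate 1mbit ceil 1mbit
  done

Because each TX-queue gets its own HTB instance, CPUs transmitting on
different queues no longer contend on a single root qdisc lock.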
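
And a minimal sketch of the steering knobs mentioned above (RSS hash
adjustment, RPS, and disabling XPS). Again these are illustrative
assumptions; the scripts in the xdp-cpumap-tc repo[2][3] are the full
versions, and the XDP cpumap redirect path itself lives in that repo's
BPF code:

  #!/bin/bash
  # Sketch: steer flows so a given IP always lands on the same RX queue/CPU,
  # and stop XPS from overriding the chosen CPU-to-TX-queue mapping.
  # DEV and the CPU mask are assumed values.
  DEV=eth0

  # RSS: hash RX packets on L3 src+dst IPs only
  ethtool -N $DEV rx-flow-hash tcp4 sd
  ethtool -N $DEV rx-flow-hash udp4 sd

  # RPS (software alternative): steer packets arriving on rx-0 to CPUs 0-3
  echo f > /sys/class/net/$DEV/queues/rx-0/rps_cpus

  # XPS can override the CPU-to-TX-queue mapping, so clear it per TX-queue
  for txq in /sys/class/net/$DEV/queues/tx-*; do
      echo 0 > $txq/xps_cpus
  done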