Hi Jakub,

Thanks for your feedback.

On Sat, Jun 8, 2019 at 2:19 PM Jakub Kicinski <jakub.kicinski@xxxxxxxxxxxxx> wrote:
>
> On Fri, 7 Jun 2019 19:55:34 -0700, William Tu wrote:
> > Hi,
> >
> > When using AF_XDP, the TC qdisc layer is by-passed and packets go to
> > userspace directly. One problem is that there is no QoS support when
> > using AF_XDP.
> >
> > For egress shaping, I'm thinking about using tc-mqprio, which has
> > hardware offload support. And for OVS, we can add tc-mqprio support.
>
> What is your end game? Once upon a time Simon was explaining the QoS

I thought I could do something like:
1) combine down to one queue using ethtool,
2) use AF_XDP in OVS to send packets to queue 0,
3) program mqprio to do some rate limiting, and set the priority via
SO_PRIORITY, e.g. using queues 0-1, 2-3, and 4:

tc qdisc add dev eth3 root mqprio num_tc 3 map 0 0 0 0 1 1 1 2 queues 2@0 2@2 1@4

> stuff in OvS to me, but IIRC it used CBQ and/or HTB. The XDP TX queues
> are not exposed to the stack, so we can't set per-queue QoS, setting a
> root Qdisc (like mqprio) and expecting the XDP queues to have the same
> settings would be very limiting (then again even with mqprio IDK how
> you'd select the prio? by using the TX queue ID? hm..).
>
I see. So the hw queues used by AF_XDP are the same queues used by QoS?
Then I guess the above command won't work.

> > For ingress policing, I don't know how to do it. Is there any hardware
> > offload ingress policing support?
>
> There is support for act_police in a couple drivers. Although using it
> per queue could be a challenge... (At least we do have a real queue ID
> on the RX, hopefully the mlx5 fake queues never get merged.)

Regards,
William