Re: AF_XDP Side Of Project Breaking With XDP-Native

Hey David,


Thank you for your response and the information! That cleared a lot of things up for me!


Yes, this is with Vultr.


As of right now, the packet processing software I'm using forwards traffic to another server via XDP_TX and drops, via XDP_DROP, any traffic that doesn't match our filters (these filters aren't included in the open-source project linked below). Do you know if there would be any real performance advantage to using XDP-native over XDP-generic for the XDP_TX and XDP_DROP actions in our case with the `virtio_net` driver? We're currently battling (D)DoS attacks, so I'm trying to do everything I can to drop these packets as fast as possible.
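

Just so the question is concrete, here's a stripped-down sketch of what the XDP side boils down to. The filter check below is only a placeholder (the real filter logic isn't in the open-source repo), and the address rewriting that would normally happen before XDP_TX is omitted:

```
/* Sketch only: placeholder filter check, no address rewriting.
 * Compile with: clang -O2 -target bpf -c xdp_filter.c -o xdp_filter.o
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Placeholder for the actual rule matching (sources, rates, etc.). */
static __always_inline int matches_filters(struct iphdr *ip)
{
	return 0;
}

SEC("xdp")
int xdp_forward_or_drop(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	struct ethhdr *eth = data;
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;

	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	struct iphdr *ip = (void *)(eth + 1);
	if ((void *)(ip + 1) > data_end)
		return XDP_PASS;

	/* Matching traffic gets rewritten (omitted here) and bounced
	 * back out the same NIC toward the backend server. */
	if (matches_filters(ip))
		return XDP_TX;

	/* Everything else is dropped as early as possible. */
	return XDP_DROP;
}

char _license[] SEC("license") = "GPL";
```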


If you would like to inspect the source code for this project, here's a link to the GitHub repository:


https://github.com/Dreae/compressor


I'm also working on a bigger open-source project with a friend that'll drop traffic based on filtering rules with XDP (it'll be version two of the project linked above), and we plan to use it on VMs with the `virtio_net` driver. So it'll be useful to know whether XDP-native provides a performance advantage over XDP-generic when dropping packets.
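

For context, the way I've been switching between the two modes is just the attach flag handed to libbpf. Here's a rough sketch; the object file name, program name, and interface name are made up for the example, and it assumes a reasonably recent libbpf with `bpf_set_link_xdp_fd()`:

```
/* Sketch only: example object/program/interface names. */
#include <stdio.h>
#include <net/if.h>
#include <linux/if_link.h>
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int main(void)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	int ifindex, prog_fd;

	ifindex = if_nametoindex("ens3");		/* example interface */
	if (!ifindex)
		return 1;

	obj = bpf_object__open_file("xdp_filter.o", NULL);	/* example path */
	if (libbpf_get_error(obj))
		return 1;
	if (bpf_object__load(obj))
		return 1;

	prog = bpf_object__find_program_by_name(obj, "xdp_forward_or_drop");
	if (!prog)
		return 1;
	prog_fd = bpf_program__fd(prog);

	/* Try native (driver) mode first; fall back to generic/SKB mode. */
	if (bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_DRV_MODE) < 0) {
		fprintf(stderr, "native mode failed, falling back to generic\n");
		if (bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_SKB_MODE) < 0) {
			fprintf(stderr, "generic mode failed too\n");
			return 1;
		}
	}

	return 0;
}
```

The idea is that the same object can be tested in both modes on the `virtio_net` interface just by changing that flag.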


Thank you for all your help so far. I appreciate it!


On 5/24/2020 1:58 PM, David Ahern wrote:
On 5/24/20 12:13 PM, Christian Deacon wrote:
Hey David,


Thank you for your response!


The VM only has one CPU right now. I'd imagine it's possible the cluster
has 8 RX queues, but sadly I don't have that information. I executed the
same command on another VM I have with two CPUs (one that isn't being
used for the XDP-native testing):


```
root@Test:~# ethtool -l ens3
Channel parameters for ens3:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       2
```

That's odd that they give you 8 queues for a 1-CPU VM. This is Vultr? I
may have to spin up a VM there and check it out.
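
(Side note from me: on the 2-CPU VM I can at least try raising the current channel count toward that pre-set maximum and see whether `virtio_net` on this host honours it; the value below is just an example.)

```
# Example only: request more combined channels (up to the pre-set
# maximum of 8 reported above) and confirm the new current setting.
ethtool -L ens3 combined 4
ethtool -l ens3
```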


I did receive this from my hosting provider when asking which NIC driver
they use:
...

I agree with the provider - the hardware NICs are not relevant to the VM.

To my understanding, if the NIC isn't offloading packets directly to our
VPS, wouldn't that defeat the purpose of using XDP-native over
XDP-generic/SKB mode for performance in our case? I was under the
assumption that was the point of XDP-native. If so, I'm not sure why the
program loads with XDP-native without any issues apart from the AF_XDP
program.

The host is essentially the network to your VM / VPS. What data
structure it uses is not relevant to what you want to do inside the VM.
Right now there are a lot of missing features for the host OS to rely
solely on XDP frames.

Inside the VM kernel, efficiency of XDP depends on what you are trying
to do.

A 1- or 2-CPU VM with 8 queues meets the resource requirement for XDP
programs; I am not familiar enough with the details of AF_XDP to know
whether some kind of support is missing inside the virtio driver.


I will admit I've been wondering what the difference is between
`XDP_FLAGS_DRV_MODE` (XDP-native) and `XDP_FLAGS_HW_MODE` since I
thought XDP-native was offloading packets from the NIC.

H/W mode means the program is pushed down to the hardware. I believe
only Netronome's NICs currently do offload. Some folks have discussed
offloading programs for the virtio NIC, but that does not work today.
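
(One more note from me: the quickest way I've found to confirm which of the three modes a program actually ended up attached in is the iproute2 link output; the interface name is just my test VM's.)

```
# iproute2 marks the XDP attach mode on the link line:
#   "xdpgeneric" = generic/SKB mode, "xdp" = native/driver mode,
#   "xdpoffload" = offloaded to hardware.
ip -details link show dev ens3
```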


