virtio and XDP: How to do flow steering + shared umem

I set up a KVM guest with virt-manager, running Debian buster upgraded to kernel 5.4, to test my AF_XDP program. The guest uses the NIC I normally use on my host server, attached through the macvtap driver in passthrough mode with the virtio model.

The first time I started my program inside the VM I noticed this error message:

	virtio_net virtio0 eth1: request 2 queues but max is 1.

After searching Google I found the solution: changing my interface config to the following (still with just 1 virtual CPU):

   <interface type='direct' trustGuestRxFilters='yes'>
      <mac address='52:54:00:b7:7d:c2'/>
      <source dev='eth20' mode='passthrough'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>
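
Inside the guest, the additional queue pair can then be checked via the channel settings:

	sudo ethtool -l eth1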

To observe what is happening on which RX queue, I set up a BPF_MAP_TYPE_ARRAY in my XDP program which holds the number of packets that arrived on each RX queue (observed with `bpftool map dump id xx`):

	/* Count the packet on the RX queue it arrived on */
	const int rx_queue_idx = ctx->rx_queue_index;
	unsigned long *idx_counter = bpf_map_lookup_elem(&rx_queue_pckt_counter_map, &rx_queue_idx);
	if (idx_counter != NULL) {
		*idx_counter += 1;
	}
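
For reference, the counter map itself is declared roughly like this (a minimal sketch as a BTF-defined libbpf map; the max_entries value is just an assumed upper bound on the number of RX queues):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);
		__uint(max_entries, 64);      /* assumed upper bound on RX queues */
		__type(key, int);             /* ctx->rx_queue_index */
		__type(value, unsigned long); /* packets seen on that queue */
	} rx_queue_pckt_counter_map SEC(".maps");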

Because I found no way to do receive-side steering with macvtap (see "KVM/QEMU virtio (macvtap): Is RSS possible?"), I am only testing a single RX queue with multiple AF_XDP sockets which share the same umem. Each socket is responsible for processing packets from a single multicast source.
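
I join each multicast group roughly like this (a sketch; the helper UDP socket, group address and interface name are placeholder details):

	#include <arpa/inet.h>
	#include <net/if.h>
	#include <netinet/in.h>
	#include <sys/socket.h>

	/* Join one multicast group per stream (placeholder group/interface) */
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct ip_mreqn mreq = {0};
	mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");
	mreq.imr_ifindex = if_nametoindex("eth1");
	setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));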

Now what I am observing is that some packets are received on another queue even though I set

	sudo ethtool -L eth1 combined 1

In the case where I process a single multicast source (i.e. a single `IP_ADD_MEMBERSHIP` join), the bpf map looks like this:

	$ sudo bpftool map dump id 57
	key: 00 00 00 00  value: 00 00 00 00 00 00 00 00
	key: 01 00 00 00  value: 29 dd 2d 00 00 00 00 00
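
(The key is the RX queue index and the value is a little-endian counter, so this is 0x2ddd29 ≈ 3.0 million packets, all of them accounted to queue index 1.)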

It's a bit odd that virtio doesn't start with RX queue 0, but okay, I can adapt my program. Packet loss is around 0.2% in total at around 270 kpps.

If I use two multicast streams (again joined via `IP_ADD_MEMBERSHIP`), this is the bpf map:

	$ sudo bpftool map dump id 61
	key: 00 00 00 00  value: 00 00 00 00 00 00 00 00
	key: 01 00 00 00  value: 76 a4 4b 00 00 00 00 00

Packet loss is now around 16% at 455 kpps, which is nowhere near the original performance (is this normal?).

If I add another multicast stream (3 in total), this is the bpf map:

	$ sudo bpftool map dump id 65
	key: 00 00 00 00  value: 2f 6a 0e 00 00 00 00 00
	key: 01 00 00 00  value: 77 d9 20 00 00 00 00 00

As you can see, there are now packets on both RX queue 0 and RX queue 1. Why is that, and how can I prevent it from happening?

I don't understand why ethtool shows me this: 

	$ sudo ethtool -S eth1
	NIC statistics:
		 rx_queue_0_packets: 80341229
		 rx_queue_0_bytes: 119871244125
		 rx_queue_0_drops: 79419999
		 rx_queue_0_xdp_packets: 80332151
		 rx_queue_0_xdp_tx: 0
		 rx_queue_0_xdp_redirects: 79922548
		 rx_queue_0_xdp_drops: 79419999
		 rx_queue_0_kicks: 1586
		 tx_queue_0_packets: 165
		 tx_queue_0_bytes: 13552
		 tx_queue_0_xdp_tx: 0
		 tx_queue_0_xdp_tx_drops: 0
		 tx_queue_0_kicks: 165

even though I have evidence that packets are received on more than one RX queue?

To summarize my problem:

- Somehow I have more than one RX queue even though I set `ethtool -L eth1 combined 1`.
- Furthermore, I would be fine with multiple RX queues, but only if I can decide where packets arrive via `ethtool -N eth1 flow-type udp4 ...` (an example rule is below).
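
For illustration, this is the kind of rule I would like to install, one per multicast stream (addresses, port and queue numbers are placeholders, and it assumes the driver exposes ntuple filters at all):

	sudo ethtool -K eth1 ntuple on
	sudo ethtool -N eth1 flow-type udp4 dst-ip 239.0.0.1 dst-port 5001 action 0
	sudo ethtool -N eth1 flow-type udp4 dst-ip 239.0.0.2 dst-port 5001 action 1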


