On 2019-06-13 21:09, Jakub Kicinski wrote:
> On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
>> On 2019-06-12 23:23, Jakub Kicinski wrote:
>>> On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:
>>>> Currently, libbpf uses the number of combined channels as the maximum
>>>> queue number. However, the kernel has a different limitation:
>>>>
>>>> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
>>>>
>>>> - ethtool_set_channels() checks for UMEMs in queues up to
>>>>   combined_count + max(rx_count, tx_count).
>>>>
>>>> libbpf shouldn't limit applications to a lower max queue number. Account
>>>> for non-combined RX and TX channels when calculating the max queue
>>>> number. Use the same formula that is used in ethtool.
>>>>
>>>> Signed-off-by: Maxim Mikityanskiy <maximmi@xxxxxxxxxxxx>
>>>> Reviewed-by: Tariq Toukan <tariqt@xxxxxxxxxxxx>
>>>> Acked-by: Saeed Mahameed <saeedm@xxxxxxxxxxxx>
>>>
>>> I don't think this is correct. max_tx tells you how many TX channels
>>> there can be, you can't add that to combined. Correct calculations is:
>>>
>>> max_num_chans = max(max_combined, max(max_rx, max_tx))
>>
>> First of all, I'm aligning with the formula in the kernel, which is:
>>
>> curr.combined_count + max(curr.rx_count, curr.tx_count);
>>
>> (see net/core/ethtool.c, ethtool_set_channels()).
>
> curr != max. ethtool code you're pointing me to (and which I wrote)
> uses the current allocation, not the max values.

The ethtool code uses curr because it wants to calculate the number of
queues currently in use. libbpf uses max because it wants to calculate
the maximum number of queues that can be in use. That's the only
difference, so the formula should be the same, and this calculation can
be applied either to curr or to max.

Imagine you have configured the NIC to have the maximum supported
number of channels. Then your formula in ethtool.c returns some value.
Exactly the same value should also be returned from libbpf's
xsk_get_max_queues(). That's achieved by applying your formula directly
to max.

>> The formula in libbpf should match it.
>
> The formula should be based on understanding what we're doing,
> not copying some not-really-equivalent code from somewhere :)

I do understand the code I write.

> Combined is a basically a queue pair, RX is an RX ring with a dedicated
> IRQ, and TX is a TX ring with a dedicated IRQ. If driver supports both
> combined and single purpose interrupt vectors it will most likely set
>
> max_rx = num_hw_rx
> max_tx = num_hw_tx
> max_combined = min(rx, tx)

OK, I got your example. The driver you are talking about won't support
setting rx_count = max_rx, tx_count = max_tx and combined_count =
max_combined simultaneously. However, xsk_get_max_queues() has to
return the maximum number of queues theoretically possible with this
device, to create an xsks_map of a sufficient size. Currently, it
totally ignores devices without combined channels, so max_rx and max_tx
have to be considered in the calculation.

Next thing is that the ethtool API doesn't really tell you whether the
device can create up to max_rx RX channels, max_tx TX channels and
max_combined combined channels simultaneously, or whether there are
some additional limitations. Your example shows such a limitation, but
it's not the only possible one, and this limitation is not even
mandatory for all drivers.
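To make the gap between the two formulas concrete, here is a tiny
self-contained comparison for the kind of driver you describe. The
numbers are made up purely for illustration (8 HW RX rings, 8 HW TX
rings); they don't come from any real device:

#include <stdio.h>

static unsigned int max_u32(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

int main(void)
{
	/* Hypothetical driver exposing both combined and single-purpose
	 * vectors, as in your example: max_combined = min(rx, tx).
	 */
	unsigned int max_rx = 8, max_tx = 8, max_combined = 8;

	/* max(max_combined, max(max_rx, max_tx)) -> 8 */
	printf("max-based formula: %u\n",
	       max_u32(max_combined, max_u32(max_rx, max_tx)));

	/* max_combined + max(max_rx, max_tx) -> 16, i.e. the
	 * ethtool_set_channels() expression (combined_count +
	 * max(rx_count, tx_count)) applied directly to the max values.
	 */
	printf("sum-based formula: %u\n",
	       max_combined + max_u32(max_rx, max_tx));

	return 0;
}

A map sized by the second formula may be bigger than what this
particular driver can actually use, but it can never be smaller than
the queue count any valid configuration could reach.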
As ethtool doesn't expose the information about additional limitations
imposed by the driver, and as it won't hurt if xsks_map is bigger than
necessary, my view is that we shouldn't assume any limitations we are
not sure about, so max_combined + max(max_rx, max_tx) is the right
thing to do.

> Like with most ethtool APIs there are some variations to this.
>
>> Second, the existing drivers have either combined channels or separate
>> rx and tx channels. So, for the first kind of drivers, max_tx doesn't
>> tell how many TX channels there can be, it just says 0, and max_combined
>> tells how many TX and RX channels are supported. As max_tx doesn't
>> include max_combined (and vice versa), we should add them up.
>
> By existing drivers you mean Intel drivers which implement AF_XDP,
> and your driver?

No, I meant all drivers, not only AF_XDP-enabled ones. I wasn't aware
that some of them support the choice between a combined channel and a
unidirectional channel; however, I still find my formula correct (see
the explanation above).

> Both Intel and MLX drivers seem to only set
> max_combined.

mlx4 doesn't support combined channels, but it's out of scope of this
patchset.

> If you mean all drivers across the kernel, then I believe the best
> formula is what I gave you.
>
>>>>   tools/lib/bpf/xsk.c | 6 +++---
>>>>   1 file changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
>>>> index bf15a80a37c2..86107857e1f0 100644
>>>> --- a/tools/lib/bpf/xsk.c
>>>> +++ b/tools/lib/bpf/xsk.c
>>>> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
>>>>  		goto out;
>>>>  	}
>>>>
>>>> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
>>>> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
>>>> +
>>>> +	if (ret == 0 || errno == EOPNOTSUPP)
>>>>  		/* If the device says it has no channels, then all traffic
>>>>  		 * is sent to a single stream, so max queues = 1.
>>>>  		 */
>>>>  		ret = 1;
>>>> -	else
>>>> -		ret = channels.max_combined;
>>>>
>>>> out:
>>>>  	close(fd);
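For reference, the whole flow the hunk above modifies looks roughly
like the standalone sketch below. It is only an illustration of the
proposed calculation, not the libbpf code itself; the helper name, the
max_u32() stand-in for max(), and the simplified error handling are
mine:

/* Standalone illustration only -- not part of the patch. It queries
 * ETHTOOL_GCHANNELS over SIOCETHTOOL and applies the proposed formula.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static __u32 max_u32(__u32 a, __u32 b)
{
	return a > b ? a : b;
}

/* Returns the number of entries to size the xsks_map for, or -errno. */
static int get_max_queues(const char *ifname)
{
	struct ethtool_channels channels = { .cmd = ETHTOOL_GCHANNELS };
	struct ifreq ifr = {0};
	int fd, err, ret;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -errno;

	ifr.ifr_data = (void *)&channels;
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_name[IFNAMSIZ - 1] = '\0';

	err = ioctl(fd, SIOCETHTOOL, &ifr);
	if (err && errno != EOPNOTSUPP) {
		ret = -errno;
		goto out;
	}

	/* Proposed formula: non-combined RX/TX channels are counted on
	 * top of the combined ones, mirroring the ethtool_set_channels()
	 * check.
	 */
	ret = channels.max_combined + max_u32(channels.max_rx, channels.max_tx);
	if (err || ret == 0)
		ret = 1; /* no channel info: all traffic uses a single queue */
out:
	close(fd);
	return ret;
}

int main(int argc, char **argv)
{
	int n = get_max_queues(argc > 1 ? argv[1] : "eth0");

	if (n < 0)
		fprintf(stderr, "ethtool query failed: %s\n", strerror(-n));
	else
		printf("size xsks_map for %d entries\n", n);
	return n < 0;
}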