On 2021-10-14 10:32, Tariq Toukan wrote:
> On 10/13/2021 6:02 PM, Arnd Bergmann wrote:
>> From: Arnd Bergmann <arnd@xxxxxxxx>
>>
>> When building with 64KB pages, clang points out that xsk->chunk_size
>> can never be PAGE_SIZE:
>>
>> drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c:19:22: error:
>> result of comparison of constant 65536 with expression of type 'u16'
>> (aka 'unsigned short') is always false
>> [-Werror,-Wtautological-constant-out-of-range-compare]
>>         if (xsk->chunk_size > PAGE_SIZE ||
>>             ~~~~~~~~~~~~~~~ ^ ~~~~~~~~~
>>
>> I'm not familiar with the details of this code, but from a quick look
>> I found that it gets assigned from a 32-bit variable that can be
>> PAGE_SIZE, and that the layout of 'xsk' is not part of an ABI or
>> a hardware structure, so extending the members to 32 bits as well
>> should address both the behavior on 64KB page kernels, and the
>> warning I saw.
This change is not enough to fix the behavior: mlx5e_xsk_is_pool_sane
checks that chunk_size <= 65535, so your patch only silences the
warning without actually improving 64 KB page support.

While mlx5e_xsk_is_pool_sane is just a sanity check, removing it is
not enough to support 64 KB pages. That would require a careful review
of the assumptions in the data path, because many places use 16-bit
values for packet size and headroom, and it ultimately comes down to
the hardware interface (see mpwrq_get_cqe_byte_cnt: the hardware
passes the incoming packet size as a 16-bit value) and hardware
limitations.
For example, MLX5_MPWQE_LOG_STRIDE_SZ_MAX is defined as 13 (Tariq, is
that a hardware limitation or just an arbitrary value?), which means
the maximum stride size in striding RQ is 2^13 = 8192, so the driver
will fall back to legacy RQ (which is slower). We also need to check
that legacy RQ works fine with such large buffers, though so far I
haven't found any reason why it wouldn't.
I genuinely think that allocating 64 KB per packet is a waste of
memory, and supporting it isn't very useful as long as smaller frame
sizes remain possible, especially given that it would be slower due to
the lack of striding RQ support.
It could still be implemented as a feature for net-next, but only
after careful testing and after expanding all the relevant variables
(the hardware also uses a few bits as flags, so the maximum frame size
will be smaller than 2^32 anyway, but hopefully bigger than 2^16 if
there are no other limitations).
For net, I suggest silencing the warning in some other way (casting
the type before comparing?).
>> In older versions of this code, using PAGE_SIZE was the only
>> possibility, so this would have never worked on 64KB page kernels,
>> but the patch apparently did not address this case completely.
>>
>> Fixes: 282c0c798f8e ("net/mlx5e: Allow XSK frames smaller than a page")
>> Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>
>> ---
>>   drivers/net/ethernet/mellanox/mlx5/core/en/params.h | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
>> index 879ad46d754e..b4167350b6df 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
>> @@ -7,8 +7,8 @@
>>   #include "en.h"
>>
>>   struct mlx5e_xsk_param {
>> -	u16 headroom;
>> -	u16 chunk_size;
>> +	u32 headroom;
>> +	u32 chunk_size;
> Hi Arnd,
>
> I agree with your arguments about chunk_size.
>
> Yet I have mixed feelings about extending the headroom. Pre-existing
> in-driver code uses u16 for headroom (e.g. [1]), while
> xsk_pool_get_headroom returns u32.
>
> [1] drivers/net/ethernet/mellanox/mlx5/core/en/params.c ::
> mlx5e_get_linear_rq_headroom
>
> As this patch is a fix, let's keep it minimal, addressing only the
> issue described in the title and description. We might want to move
> headroom to u32 throughout the driver in a separate patch to -next.
>>   };
>>
>>   struct mlx5e_lro_param {