On 2019-06-15 04:40, Jakub Kicinski wrote:
> On Fri, 14 Jun 2019 13:25:28 +0000, Maxim Mikityanskiy wrote:
>> On 2019-06-13 20:29, Jakub Kicinski wrote:
>>> On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
>>>
>>> Yes, okay, I get that. But I still don't know what exact use you
>>> have for AF_XDP buffers being 4k. Could you point us to the place
>>> in the code which relies on all buffers being 4k in any XDP
>>> scenario?
>
> Okay, I still don't get it, but thanks for explaining :) Perhaps it
> will become clearer when you respin with patch 17 split into
> reviewable chunks :)

I'm sorry, but as I said above, I don't think splitting it is necessary
or a good thing to do. I used to have it separated, but I squashed the
patches to shorten the series and to avoid leaving /* TODO: implement */
comments in empty functions that are only implemented in the next patch.
Unsquashing them is going to take more time, which I unfortunately don't
have, as I'm flying to Netconf tomorrow and then going on vacation. So I
would really like to avoid it unless it's absolutely necessary.
Moreover, it won't increase readability: you would have to jump between
the patches to see the complete implementation of a single function.
It's a single feature, after all.

>> 1. An XDP program is set on all queues, so to support non-4k AF_XDP
>> frames, we would also need to support multiple-packet-per-page XDP
>> on the regular queues.
>
> Mm.. do you have some materials on how the mlx5 DMA/RX works? I'd
> think that if you do a single packet per buffer, then as long as all
> packets are guaranteed to fit in the buffer (based on MRU), the HW
> shouldn't care what the size of the buffer is.

It's not related to the hardware; it's about performance. One packet
per page lets us use the page pool in the optimal way, without ever
holding pages at refcnt == 2. Maybe Tariq or Saeed could explain it in
more detail.

>> 2. Page allocation in mlx5e perfectly fits page-sized XDP frames.
>> Some examples in the code are:
>>
>> 2.1. mlx5e_free_rx_mpwqe calls the generic mlx5e_page_release to
>> release the pages of an MPWQE (multi-packet work queue element),
>> which is implemented as xsk_umem_fq_reuse in the XSK case. We avoid
>> extra overhead by exploiting the fact that packet == page.
>>
>> 2.2. mlx5e_free_xdpsq_desc performs cleanup after XDP transmits. In
>> the XDP_TX case, we can free/recycle the pages without any refcount
>> overhead, again by using the fact that packet == page.
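
To make 2.1 more concrete, here is a rough sketch of the release path.
It is not the exact driver code: the field names (rq->umem,
dma_info->addr) and the mlx5e_page_release_regular fallback are
simplified/hypothetical. The point is that the XSK branch never touches
page refcounts, because packet == page == UMEM frame:

/* Rough sketch of the idea behind mlx5e_page_release in the XSK case.
 * Because packet == page, one RX buffer corresponds to exactly one
 * UMEM frame, so releasing a buffer is just pushing its frame address
 * back onto the fill queue reuse cache - no page refcounting at all.
 */
static void mlx5e_page_release_sketch(struct mlx5e_rq *rq,
                                      struct mlx5e_dma_info *dma_info,
                                      bool recycle)
{
        if (rq->umem) {
                /* XSK: recycle the whole 4k frame into the UMEM fill
                 * queue; avoids get_page()/put_page() pairs. */
                xsk_umem_fq_reuse(rq->umem, dma_info->addr);
        } else {
                /* Regular queue (illustrative helper): try the driver
                 * page cache first, otherwise unmap and release. */
                mlx5e_page_release_regular(rq, dma_info, recycle);
        }
}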
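
And similarly, a hedged sketch of the XDP_TX completion path from 2.2.
The xdpi layout and mode names are modeled on the series but may not
match it exactly; the idea is that an XDP_TX page goes back to the
owning RQ as a whole page, while ndo_xdp_xmit frames are returned to
the stack via xdp_return_frame:

/* Sketch of mlx5e_free_xdpsq_desc-style completion handling. An
 * XDP_TX page is still owned by the RQ that received it, so it can be
 * recycled wholesale (packet == page), with no refcnt == 2 dance.
 * The mlx5e_xdp_info structure here is illustrative.
 */
static void mlx5e_free_xdpsq_desc_sketch(struct mlx5e_xdp_info *xdpi,
                                         bool recycle)
{
        switch (xdpi->mode) {
        case MLX5E_XDP_XMIT_MODE_PAGE:
                /* XDP_TX: hand the page back to the RQ recycler. */
                mlx5e_page_release_sketch(xdpi->page.rq,
                                          &xdpi->page.di, recycle);
                break;
        case MLX5E_XDP_XMIT_MODE_FRAME:
                /* ndo_xdp_xmit: the frame belongs to the stack. */
                xdp_return_frame(xdpi->frame.xdpf);
                break;
        }
}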