Re: [PATCH net-next 6/6] net: mvneta: enable jumbo frames for XDP

> Jakub Kicinski wrote:
> > On Wed, 19 Aug 2020 22:22:23 +0200 Lorenzo Bianconi wrote:
> > > > On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:  
> > > > > Enable the capability to receive jumbo frames even if the interface is
> > > > > running in XDP mode
> > > > > 
> > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>  
> > > > 
> > > > Hm, already? Is all the infra in place? Or does it not imply
> > > > multi-buffer?
> > > 
> > > With this series mvneta supports xdp multi-buff on both the rx and tx sides
> > > (XDP_TX and ndo_xdp_xmit()), so we can remove the MTU limitation.
> > 
> > Is there an API for programs to access the multi-buf frames?
> 
> Hi Lorenzo,

Hi Jakub and John,

> 
> This is not enough to support multi-buffer in my opinion. I have the
> same comment as Jakub. We need an API to pull in the multiple
> buffers, otherwise we break the ability to parse the packets, and that
> is a hard requirement for me. I don't want to lose visibility just to
> get jumbo frames.

I was not clear enough in the commit message, sorry about that.
This series aims to finalize xdp multi-buff support for the mvneta driver only.
Our plan is to work on the helpers/metadata in a subsequent series, since
the driver support is largely orthogonal. If you think we need the helpers
in place before removing the MTU constraint, we could just drop the last
patch (6/6) and apply patches 1/6 to 5/6, since they are the preliminary
work needed to remove the MTU constraint. Do you agree?

> 
> At minimum we need a bpf_xdp_pull_data() to adjust the pointers. In the
> skmsg case we use this:
> 
>   bpf_msg_pull_data(u32 start, u32 end, u64 flags)
> 
> Here start is the offset into the packet and end is the last byte we
> want the start/end pointers adjusted to cover. This way we can walk pages
> if we want and avoid having to linearize the data unless the user actually
> asks for a block that crosses a page range. Smart users then never
> do a start/end that crosses a page boundary if possible. I think the
> same would apply here.
> 
> XDP by default gives you the first page start/end to use freely. If
> you need to parse deeper into the payload then you call bpf_msg_pull_data
> with the byte offsets needed.
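
Just to make sure I am reading the proposal correctly, something like the
sketch below? This is purely hypothetical: bpf_xdp_pull_data() does not exist
yet, and I am assuming a signature that mirrors bpf_msg_pull_data() plus the
xdp_md context; the helper id and the byte offsets are made up.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical helper: not in the kernel today, signature assumed to
 * mirror bpf_msg_pull_data(); the id 999 is only a placeholder. */
static long (*bpf_xdp_pull_data)(struct xdp_md *xdp, __u32 start,
                                 __u32 end, __u64 flags) = (void *) 999;

SEC("xdp")
int pull_and_parse(struct xdp_md *ctx)
{
    /* Bytes 2000..2100 of a jumbo frame may live in a later fragment.
     * Ask the kernel to make them visible before touching them. */
    if (bpf_xdp_pull_data(ctx, 2000, 2100, 0) < 0)
        return XDP_PASS;

    /* Re-read the pointers after the pull; the assumption is that
     * data/data_end now cover the requested range, so the usual
     * bounds checks still apply. */
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    if (data + 100 > data_end)
        return XDP_PASS;

    /* ... parse the pulled bytes here ... */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";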

Our first proposal is described here [0][1]. In particular, we are assuming the
eBPF layer can access just the first fragment of a non-linear XDP buff, and
we will provide some non-linear xdp metadata (e.g. the number of segments in
the xdp_buff or the total buffer length) to the eBPF program attached to the
interface. Anyway, IMHO this mvneta series is not strictly tied to that approach.
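
To give a rough idea of what that could look like: the struct layout and field
names below are illustrative only, not the final design from [0], and I am
assuming the metadata would be exposed through the data_meta area.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative layout only; not an existing kernel structure. */
struct xdp_mb_md {
    __u32 num_frags;  /* number of fragments backing the xdp_buff */
    __u32 total_len;  /* total frame length across all fragments  */
};

SEC("xdp")
int mb_aware(struct xdp_md *ctx)
{
    void *data      = (void *)(long)ctx->data;
    void *data_meta = (void *)(long)ctx->data_meta;
    struct xdp_mb_md *md = data_meta;

    /* The metadata area sits right in front of the packet data. */
    if ((void *)(md + 1) > data)
        return XDP_PASS;

    /* The program can only touch the first fragment directly, but it
     * still knows the real size of the jumbo frame. */
    if (md->num_frags > 1)
        bpf_printk("multi-buff frame: %u frags, %u bytes",
                   md->num_frags, md->total_len);

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";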

Regards,
Lorenzo

[0] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[1] http://people.redhat.com/lbiancon/conference/NetDevConf2020-0x14/add-xdp-on-driver.html (XDP multi-buffers section)

> 
> Also we would want performance numbers to see how good/bad this is
> compared to the base case.
> 
> Thanks,
> John
