On Thu, 9 Apr 2020 23:07:42 +0000 Saeed Mahameed wrote:
> > My concern is that a driver may allocate a full page for each frame
> > but only DMA map the amount that can reasonably contain data given
> > the MTU, to save on DMA syncs.
> > 
> > Today that wouldn't be a problem, because XDP_REDIRECT will re-map
> > the page, and XDP_TX has the same MTU.
> 
> I am not worried about DMA at all, I am worried about the XDP progs
> which are now allowed to extend packets beyond the MTU and do XDP_TX.
> But as I am thinking about this I just realized that this can already
> happen with xdp_adjust_head()..
> 
> But as you stated above this puts a lot of assumptions on how the
> driver should DMA map RX buffers.
> 
> > In this set xdp_data_hard_end is used both to find the end of the
> > memory buffer and the end of the DMA buffer. The implementation of
> > bpf_xdp_adjust_tail() assumes anything less than
> > SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) from the end is
> > fair game.
> 
> But why skb_shared_info in particular? This assumes someone needs
> this tail for building skbs.. looks weird to me.

Fair, it simplifies the internals, I guess.

> > So I was trying to say that we should warn driver authors that the
> > DMA buffer can now grow / move beyond what the driver may expect
> > in XDP_TX.
> 
> Ack, but can we do it by design? I.e. instead of having hardcoded
> limits (e.g. SKB_DATA_ALIGN(shinfo)) in bpf_xdp_adjust_tail(), let
> the driver provide this, or any other restrictions, e.g. MTU for TX,
> or driver-specific memory model restrictions..

Right, actually for NFP we need to add the check already - looking at
the code, the DMA mapping does not cover anything beyond the headroom
plus MTU.

> > Drivers can either DMA map enough memory, or handle the corner
> > case in a special way.
> > 
> > IDK if that makes sense, we may be talking past each other :)