On Thu, Mar 10, 2022 at 11:25 PM Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> > Also I don't believe that this patch set and exposing
> > bpf_xdp_pointer as a helper actually gives measurable performance wins.
> > It looks quirky to me and hard to use.
>
> This is actually inspired by your idea to avoid memcpy when reading and
> writing to multi-buff XDP [0]. But instead of passing in the stack or mem
> pointer (as discussed in that thread), I let the user set it and detect it
> themselves, which makes the implementation simpler.
>
> I am sure accessing a few bytes directly is going to be faster than first
> memcpy'ing them to a local buffer, reading, and then possibly writing things
> back again using a memcpy, but I will be happy to come up with some numbers
> when I respin this later, once Joanne posts the dynptr series.
>
> Of course, we could just make the return value PTR_TO_MEM even for the
> 'pass buf pointer' idea, but then we would have to conservatively invalidate
> the pointer on clear_all_pkt_pointers even if it points to a stack buffer.
> The current approach looked better to me.
>
> [0]: https://lore.kernel.org/bpf/CAADnVQKbrkOxfNoixUx-RLJEWULJLyhqjZ=M_X2cFG_APwNyCg@xxxxxxxxxxxxxx

There is a big difference between what was proposed earlier and what you've
implemented. In that proposal bpf_packet_pointer would behave similarly to
skb_header_pointer. Kernel developers are familiar with it, and the concept
has been tested by time. Your bpf_packet_pointer is different: it fails on
frag boundaries, so the user has to open-code the checks and add explicit
bpf_xdp_load_bytes calls.

I guess you're arguing that it's the same. My point is that it's a big
conceptual difference in the API. This kind of difference should be discussed
instead of dumping a patch set, especially without an RFC tag.

Please focus on the kptr patches. They were close enough and have a chance
of landing in this release. This bpf_packet_pointer stuff is certainly not
going to make it.