Alexander Lobakin wrote:
> From: Willem De Bruijn <willemdebruijn.kernel@xxxxxxxxx>
> Date: Tue, 19 Nov 2024 10:14:28 -0500
>
> > Alexander Lobakin wrote:
> >> From: Willem De Bruijn <willemdebruijn.kernel@xxxxxxxxx>
> >> Date: Sat, 16 Nov 2024 10:31:08 -0500
>
> [...]
>
> >> libeth_xdp depends on every patch from the series. I don't know why
> >> you believe this would move any faster. Almost the whole series got
> >> reviewed relatively quickly, except the drivers/intel folder, which
> >> people often tend to avoid.
>
> > Smaller focused series might have been merged already.
>
> Merging half of this series wouldn't change the fact that the whole set
> wouldn't fit into one window (which is what you want). Merging half of
> this series wouldn't allow sending the idpf XDP parts.

I meant that smaller series make it easier to gather feedback and
iterate quickly. So multiple focused series can make the same window.

> >
> >> I remind you that the initial libeth + iavf series (11 patches) was
> >> baking on LKML for one year. Here, 2 Chapters went into the kernel
> >> within 2 windows, and only this one (clearly much bigger than the
> >> previous ones, and containing only generic changes, in contrast to
> >> the previous ones, which had only /intel code) didn't follow this
> >> rule, which doesn't necessarily mean it will be stuck for too long.
> >>
> >> (+ I clearly mentioned several times that Chapter III would take
> >> longer than the rest, and each time you had no issues with that)
> >
> > This is a misunderstanding. I need a working feature, on a predictable
> > timeline, in distro kernels.
>
> A predictable timeline is not what upstream is about, at least when it
> comes to series which introduce a lot of generic changes/additions.
> A good example is PFCP offload in ice: the initial support was done and
> sent in spring 2022, then it took almost 2 years until it landed in the
> kernel. The first series was of good quality, but there will always be
> discussions, differing opinions, etc.
>
> I've no idea what misunderstanding you are talking about; I quoted what
> Oregon told me quoting you. You seem to have ignored the email I sent
> with a per-patch breakdown of why none of the patches can be dropped in
> order to upstream XDP for idpf; at least I haven't seen any reply. I've
> no idea what they promise you each kernel release, but I haven't
> promised anything except sending the first working RFC by the end of
> 2023, which was done back then; promising that feature X will
> definitely land in upstream release Y would mean lying. There's always
> a risk that even a small series can easily miss 1-3 kernel releases.
> Take a look at Amit's comment. It involves additional work which I
> didn't expect. I'm planning to do it while the window is closed, as the
> suggestion is perfectly valid and I don't have any arguments against
> it. Feel free to go there and argue that the comment is not valid
> because you want the series merged ASAP, if you think that this
> "argument" works upstream.
> >
> >>>
> >>> The first 3 patches are not essential to IDPF XDP + AF_XDP either.
> >>
> >> You don't seem to read the code. libeth_xdp won't even build without
> >> them.
> >
> > Not as written, no, obviously.
>
> If you want to compare with the OOT implementation for the 10th time,
> let me remind you that it differs a ton from the upstream version of
> idpf. The OOT driver still doesn't use Page Pool (without which idpf
> wouldn't have been accepted upstream at all), for example, which
> automatically drops the dependency on several big patches from this
> series.
> The OOT implementation performs X times worse than the upstream ice.
> It still forces header split to be turned off when an XDP prog is
> installed. It still uses hardcoded Rx buffer sizes. I could keep
> enumerating things from the OOT driver that are unacceptable upstream
> forever.

I was not referring to the OOT series. See below.

> >
> >> I don't believe the model taken by some developers (not spelling
> >> names out loud), "let's submit minimal changes and almost-draft code,
> >> I promise I'll create a todo list and will be polishing it within the
> >> next x years", works at all, let alone works better than sending
> >> polished, mature code (which I hope is the better approach).
> >>
> >>> The IDPF feature does not have to depend on them.
> >>>
> >>> Does not matter for upstream, but for the purpose of backporting
> >>> this to distro kernels, it helps if the driver feature minimizes its
> >>> dependency on core kernel API changes. If patch 19 can be made to
> >>> work without
> >>
> >> OOT style of thinking.
> >> Minimizing core changes == artificially limiting optimization and
> >> functionality potential.
> >> New kernels > LTSes, and especially custom kernels which receive
> >> non-upstream (== not officially supported by the community) feature
> >> backports. Upstream shouldn't sacrifice anything in favor of those;
> >> that way, we'd end up one day sacrificing stuff for out-of-tree
> >> drivers (which I know some people already try to do).
> >
> > Opinionated positions. Nice if you have unlimited time.
>
> I clearly remember Kuba's position that he wants good quality in the
> networking core and driver code. I'm pretty sure every netdev
> maintainer has the same position. Again, feel free to argue with them,
> saying they must take whatever trash is sent to LKML because customer X
> wants it backported to his custom kernel Y ASAP.

Not asking for massive changes, just suggesting a different patch
order. That does not affect code quality.

The core feature set does not depend on loop unrolling, constification,
removal of xdp_frame::mem.id, etcetera. These can probably be reviewed
and merged more quickly independently of this driver change, even.
Within IDPF, the same goes for per-queue(set) link up/down and chunked
virtchannel messages. A deeper analysis can probably carve out other
changes not critical to landing XDP/XSK (sw marker removal).

> >
> >>> some of the changes in 1..18, that makes it more robust from that
> >>> PoV.
> >>
> >> No, it can't. I thought people first read the code and only then
> >> comment; otherwise it's just wasting time.
>
> Thanks,
> Olek