On Sat, Apr 6, 2024 at 9:05 AM Alexander Duyck <alexander.duyck@xxxxxxxxx> wrote:
>
> > > > You have an unavailable NIC, so we know it is only ever operated with
> > > > Meta's proprietary kernel fork, supporting Meta's proprietary
> > > > userspace software. Where exactly is the open source?
> > >
> > > It depends on your definition of "unavailable". I could argue that
> > > many of the Mellanox NICs also have limited availability, as they
> > > aren't exactly easy to get hold of without paying a hefty ransom.
> >
> > And GNICs that run Mina's series are completely unavailable right
> > now. That is still a big difference: a temporary issue versus a
> > permanent structural intention of the manufacturer.
>
> I'm assuming it is some sort of firmware functionality that is needed
> to enable it? One thing with our design is that the firmware actually
> has minimal functionality. Basically it is the liaison between the
> BMC, Host, and the MAC. Otherwise it has no role to play in the
> control path, so when the driver is loaded it is running the show.

Sorry, I didn't realize our devmem TCP work was mentioned in this
context. Just jumping in to say: no, this is not the case. Devmem TCP
does not require firmware functionality, AFAICT.

The selftest provided with the devmem TCP series should work on any
driver that:

1. supports header split/flow steering/RSS/page pool (I guess this
   support may need firmware changes...).

2. supports the new queue configuration ndos:
https://patchwork.kernel.org/project/netdevbpf/patch/20240403002053.2376017-2-almasrymina@xxxxxxxxxx/

3. supports the new netmem page_pool APIs:
https://patchwork.kernel.org/project/netdevbpf/patch/20240403002053.2376017-8-almasrymina@xxxxxxxxxx/

No firmware changes specific to devmem TCP are needed, AFAICT. All of
these are driver changes.
I also always publish a full branch with all the GVE changes so
reviewers can check if there is anything too specific to GVE that
we're doing. So far there have been no issues, and to be honest I
can't see anything GVE-specific that we do for devmem TCP:

https://github.com/mina/linux/commits/tcpdevmem-v7/

In fact, GVE is IMO a relatively feature-light driver, and the fact
that GVE can do devmem TCP IMO makes it easier for fancier NICs to
also do devmem TCP.

I'm working with folks interested in extending devmem TCP to their
drivers, and they may follow up with patches after the series is
merged (or before). The only reason I haven't implemented devmem TCP
for multiple different drivers is a logistical one: I don't have
access to hardware that supports all these prerequisite features
other than GVE.

--
Thanks,
Mina