On Fri, May 26, 2017 at 10:56:25AM -0700, Alexei Starovoitov wrote:
> > for that feature which is the originating place, before defining
> > APIs/infrastructures, until the feature is complete and everybody
> > is happy about it.
>
> There is drivers/fpga to manage FPGAs, but the mlx FPGA+NIC combo
> will be managed via mlx5/core/fpga/.
>
> Adding fpga folks for visibility.

It would be good to use the existing fpga loading infrastructure to get
the bitstream into the NIC, and to use the same Xilinx bitstream format
as e.g. Zynq does, for consistency.

I'm unclear how this works. There must be more to it than just a 'bump
on the wire': there must be some communication channel between the FPGA
and Linux to set operational data (e.g. to load keys), etc.

If that is register-mapped into a PCI BAR someplace, then it really
should use the FPGA layer functions to manage binding drivers to that
register window. If it is mailbox-command based, it is not as good a
fit.

Is this FPGA expected to be customer-programmable? In that case you
really need the full infrastructure to bind the right driver (possibly
a customer driver) to the current FPGA, to expose the correct
operational interface to the kernel.

Jason
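[Editor's note: for concreteness, the drivers/fpga route suggested above could look roughly like the sketch below. This is a hedged illustration, not anything from the thread: mlx5_fpga_load_bitstream is an invented function name, and the fpga_image_info_alloc()/single-argument fpga_mgr_load() form of the FPGA manager API landed in kernels later than the one under discussion here.]

```c
/*
 * Hypothetical sketch: loading a bitstream into the NIC's FPGA via the
 * common FPGA manager framework in drivers/fpga, instead of a private
 * mlx5 path.  Function and parameter names on the mlx5 side are made up
 * for illustration; the fpga_mgr_* calls are the in-kernel API.
 */
#include <linux/fpga/fpga-mgr.h>

static int mlx5_fpga_load_bitstream(struct device *fpga_dev,
				    const char *firmware_name)
{
	struct fpga_manager *mgr;
	struct fpga_image_info *info;
	int ret;

	/* Look up the FPGA manager registered for this device. */
	mgr = fpga_mgr_get(fpga_dev);
	if (IS_ERR(mgr))
		return PTR_ERR(mgr);

	info = fpga_image_info_alloc(fpga_dev);
	if (!info) {
		ret = -ENOMEM;
		goto put_mgr;
	}
	/* Request the bitstream via the firmware loader by name. */
	info->firmware_name = devm_kstrdup(fpga_dev, firmware_name,
					   GFP_KERNEL);

	ret = fpga_mgr_lock(mgr);
	if (ret)
		goto free_info;

	/* Program the device through the common drivers/fpga path. */
	ret = fpga_mgr_load(mgr, info);

	fpga_mgr_unlock(mgr);
free_info:
	fpga_image_info_free(info);
put_mgr:
	fpga_mgr_put(mgr);
	return ret;
}
```

Going through fpga_mgr_load() is what would let the framework enforce a common bitstream format and state machine, rather than each vendor driver reimplementing its own loading sequence.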