On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
> > Dear kernel maintainers,
> >
> > This submission is a kernel driver to support Intel(R) Gaussian & Neural
> > Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor
> > available on multiple Intel platforms. AI developers and users can offload
> > continuous inference workloads to an Intel(R) GNA device in order to free
> > processor resources and save power. Noise reduction and speech recognition
> > are the examples of the workloads Intel(R) GNA deals with while its usage
> > is not limited to the two.
>
> How does this compare with the "nnpi" driver being proposed here:
> https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@xxxxxxxxx
>
> Please work with those developers to share code and userspace api and
> tools. Having the community review two totally different apis and
> drivers for the same type of functionality from the same company is
> totally wasteful of our time and energy.

Agreed, but I think we should go further than this and work towards a
subsystem across companies for machine learning and neural network
accelerators, covering both inferencing and training.

We have support for Intel habanalabs hardware in drivers/misc, and there
are countless hardware solutions out of tree that would hopefully go the
same way with an upstream submission and an open source user space,
including:

- Intel/Mobileye EyeQ
- Intel/Movidius Keembay
- Nvidia NVDLA
- Gyrfalcon Lightspeeur
- Apple Neural Engine
- Google TPU
- Arm Ethos

plus many more that are somewhat less likely to gain fully open source
driver stacks.

        Arnd