On Mon, Sep 25, 2023 at 12:29 PM Cancan Chang <Cancan.Chang@xxxxxxxxxxx> wrote:
>
> Thank you for your reply, Jagan & Oded.
>
> It is very appropriate for my driver to be placed in drivers/accel.
>
> My accelerator is named ADLA (Amlogic Deep Learning Accelerator).
> It is an IP block in the SoC, mainly used to accelerate neural network
> models. It splits and compiles the neural network model into a command
> buffer in a private format, and submits this command buffer to the ADLA
> hardware. It is not a programmable device.

What exactly does it mean to "submit this cmd buffer to ADLA hardware"?
Does your h/w provide queues for the user/driver to put their
workloads/cmd-bufs on, and does it provide some completion queue to
notify when the work is completed? Or do you handle one cmd at a time,
where the user sends a cmd buffer to the driver and the driver then
submits it by writing to a couple of registers and polls on some status
register until it's done, or waits for an interrupt to mark it as done?

> ADLA includes four hardware engines:
> RS engines: working on the reshape operators
> MAC engines: working on the convolution operators
> DW engines: working on the planar & elementwise operators
> Activation engines: working on the activation operators (ReLU, tanh, ...)
>
> By the way, my IP is mainly used in an SoC, and the current driver
> registration is through platform_driver. Is it necessary to switch to drm?

This probably depends on the answer to my question above.
Btw, there are drivers in drm that handle IPs that are part of an SoC, so
platform_driver is supported.

Oded

>
> thanks.
> ________________________________________
> From: Oded Gabbay <ogabbay@xxxxxxxxxx>
> Sent: September 22, 2023, 23:08
> To: Jagan Teki
> Cc: Cancan Chang; linux-media; linux-kernel; Dave Airlie; Daniel Vetter
> Subject: Re: kernel.org 6.5.4 , NPU driver, --not support (RFC)
>
> On Fri, Sep 22, 2023 at 12:38 PM Jagan Teki <jagan@xxxxxxxxxx> wrote:
> >
> > On Fri, 22 Sept 2023 at 15:04, Cancan Chang <Cancan.Chang@xxxxxxxxxxx> wrote:
> > >
> > > Dear Media Maintainers,
> > > Thanks for your attention. Before describing my problem, let me
> > > explain what I mean by NPU. An NPU is a Neural Processing Unit,
> > > designed for deep-learning acceleration; it is also called a TPU,
> > > APU, etc.
> > >
> > > The real problems:
> > > When I was about to upstream my NPU driver code to the Linux
> > > mainline, I ran into two problems:
> > > 1. According to my research, there is no NPU module path in Linux
> > > (based on Linux 6.5.4). I have searched the whole Linux tree and
> > > found no organization or company that has submitted NPU code. Is
> > > there a path prepared for NPU drivers currently?
> > > 2. If there is no NPU driver path currently, I am going to put my
> > > NPU driver code in drivers/media/platform/amlogic/, because my NPU
> > > driver belongs to Amlogic, and the Amlogic NPU is mainly used for
> > > AI vision applications. Is this plan suitable for you?
> >
> > If I remember the earlier discussion with Oded Gabbay correctly, I
> > think drivers/accel/ is the proper place for AI accelerators,
> > including NPUs.
> >
> > + Oded in case he can comment.
> >
> > Thanks,
> > Jagan.
>
> Thanks Jagan for adding me to this thread. Adding Dave & Daniel as well.
>
> Indeed, drivers/accel is the place for accelerators, mainly
> AI/deep-learning accelerators.
> We currently have 3 drivers there already.
>
> The accel subsystem is part of the larger drm subsystem.
> Basically, to get into accel, you need to integrate your driver with
> drm at the basic level (registering a device, hooking up the proper
> callbacks). Of course, the more code you reuse from drm, the better.
> You can take a look at the drivers under accel for some examples of
> how to do that.
>
> Could you please describe in a couple of sentences what your
> accelerator does, which engines it contains, and how you program it?
> I.e., is it a fixed-function device where you write to a couple of
> registers to execute workloads, or is it a fully programmable device
> where you load compiled code into it (GPU style)?
>
> For better background on the accel subsystem, please read the following:
> https://docs.kernel.org/accel/introduction.html
> This introduction also contains links to other important email threads
> and to Dave Airlie's BOF summary from LPC 2022.
>
> Thanks,
> Oded