On Fri, Sep 18, 2020 at 03:31:51PM +0300, Oded Gabbay wrote:
> On Fri, Sep 18, 2020 at 3:19 PM Leon Romanovsky <leon@xxxxxxxxxx> wrote:
> >
> > On Fri, Sep 18, 2020 at 03:07:19PM +0300, Oded Gabbay wrote:
> > > On Fri, Sep 18, 2020 at 3:03 PM Leon Romanovsky <leon@xxxxxxxxxx> wrote:
> > > >
> > > > On Fri, Sep 18, 2020 at 02:56:09PM +0300, Oded Gabbay wrote:
> > > > > On Fri, Sep 18, 2020 at 2:52 PM Leon Romanovsky <leon@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Fri, Sep 18, 2020 at 02:36:10PM +0300, Gal Pressman wrote:
> > > > > > > On 17/09/2020 20:18, Jason Gunthorpe wrote:
> > > > > > > > On Tue, Sep 15, 2020 at 11:46:58PM +0300, Oded Gabbay wrote:
> > > > > > > >> infrastructure for communication between multiple accelerators. Same
> > > > > > > >> as Nvidia uses NVLink, we use RDMA that we have inside our ASIC.
> > > > > > > >> The RDMA implementation we did does NOT support some basic RDMA
> > > > > > > >> IBverbs (such as MR and PD) and therefore, we can't use the rdma-core
> > > > > > > >> library or connect to the rdma infrastructure in the kernel.
> > > > > > > >
> > > > > > > > You can't create a parallel RDMA subsystem in netdev, or in misc, and
> > > > > > > > you can't add random device offloads as IOCTLs to netdevs.
> > > > > > > >
> > > > > > > > RDMA is the proper home for all the networking offloads that don't fit
> > > > > > > > into netdev.
> > > > > > > >
> > > > > > > > EFA was able to fit into rdma-core/etc and it isn't even RoCE at
> > > > > > > > all. I'm sure this can too.
> > > > > > >
> > > > > > > Well, EFA wasn't welcomed to the RDMA subsystem with open arms ;),
> > > > > > > initially it was suggested to go through the vfio subsystem instead.
> > > > > > >
> > > > > > > I think this comes back to the discussion we had when EFA was
> > > > > > > upstreamed, which is: what's the bar to get accepted into the RDMA
> > > > > > > subsystem?
> > > > > > > IIRC, what we eventually agreed on is having a userspace rdma-core
> > > > > > > provider and ibv_{ud,rc}_pingpong working (or just supporting one of
> > > > > > > the IB spec's QP types?).
> > > > > > >
> > > > > > > Does GAUDI fit these requirements? If not, should it be in a different
> > > > > > > subsystem, or should we open the "what qualifies as an RDMA device"
> > > > > > > question again?
> > > > > >
> > > > > > I want to remind you that the rdma-core requirement came about to make
> > > > > > sure that anything exposed from RDMA to userspace is strict, with
> > > > > > proper UAPI header hygiene.
> > > > > >
> > > > > > I doubt that Habana's ioctls are backed by anything like this.
> > > > > >
> > > > > > Thanks
> > > > >
> > > > > Why do you doubt that? Have you looked at our code?
> > > > > Our uapi and IOCTL interface is based on the drm subsystem's uapi
> > > > > interface, and it is very safe and protected.
> > > >
> > > > Yes, I looked and didn't find open-source users of your UAPI headers.
> > > > It is not related to being safe or protected, but to the common request
> > > > to present userspace that relies on those exported interfaces.
> > > >
> > > > > Otherwise Greg would have never allowed me to go upstream in the first
> > > > > place.
> > > >
> > > > Nice, can we get a link?
> > > >
> > > > > We have a single function which is the entry point for all the IOCTLs
> > > > > of our drivers (only one IOCTL is RDMA related; all the others are
> > > > > compute related).
> > > > > That function is almost a 1:1 copy of the function in drm.
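[Editor's note: for readers unfamiliar with the pattern Oded describes, below is a minimal sketch of the drm_ioctl()-style single entry point: one function that validates the command against a handler table, copies the argument struct in and out of the kernel, and dispatches. All struct, macro, and function names here are hypothetical illustrations, not the actual habanalabs or DRM code.

#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>

typedef int (*hl_ioctl_fn)(struct file *filp, void *data);

struct hl_ioctl_desc {
	unsigned int cmd;	/* full ioctl number; encodes direction and size */
	hl_ioctl_fn func;	/* handler for this command */
};

/* one table entry per ioctl; this table is the whole UAPI surface */
static const struct hl_ioctl_desc hl_ioctls[] = {
	/* { HL_IOCTL_FOO, hl_foo_ioctl }, ... (hypothetical entries) */
};

static long hl_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	unsigned int nr = _IOC_NR(cmd);
	unsigned int size = _IOC_SIZE(cmd);
	const struct hl_ioctl_desc *desc;
	char stack_buf[128], *kdata = stack_buf;
	long ret;

	if (nr >= ARRAY_SIZE(hl_ioctls))
		return -ENOTTY;

	desc = &hl_ioctls[nr];
	if (desc->cmd != cmd)	/* direction and size must match the table */
		return -ENOTTY;

	if (size > sizeof(stack_buf)) {
		kdata = kzalloc(size, GFP_KERNEL);
		if (!kdata)
			return -ENOMEM;
	}

	if (cmd & IOC_IN) {
		/* copy the user's argument struct into the kernel */
		if (copy_from_user(kdata, (void __user *)arg, size)) {
			ret = -EFAULT;
			goto out;
		}
	} else {
		/* never pass uninitialized kernel memory to a handler */
		memset(kdata, 0, size);
	}

	ret = desc->func(filp, kdata);

	/* copy results back out if the ioctl direction says so */
	if (!ret && (cmd & IOC_OUT) &&
	    copy_to_user((void __user *)arg, kdata, size))
		ret = -EFAULT;
out:
	if (kdata != stack_buf)
		kfree(kdata);
	return ret;
}

The point of the pattern is that every UAPI struct passes through a single, size-checked copy_from_user()/copy_to_user() choke point, which is the "safe and protected" property claimed above.]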
> > > >
> > > > DRM has the same rules as RDMA: no kernel code will be merged without
> > > > seeing open-source userspace.
> > > >
> > > > Thanks
> > > >
> > > > > Thanks,
> > > > > Oded
> > >
> > > So we do have an open-source library called hl-thunk, which uses our
> > > driver, and indeed that was part of the requirement.
> > > It is similar to libdrm.
> > > Here is the link:
> > > https://github.com/HabanaAI/hl-thunk
> >
> > Are you kidding?
> >
> > This is a mirror of some internal repository that looks like a dumpster,
> > with Change-Ids and internal bug tracker numbers, and it is not part of
> > major OS distributions.
> >
> > It is not an open-source library, and it shows very clearly why you chose
> > to upstream your driver through the drivers/misc/ tree.
> >
> > Thanks
>
> Adding Olof here.
>
> No, usually not.
> But are you kidding?
> What exactly did you expect to find? Is there an open-source project
> somewhere that encapsulates deep-learning accelerators which I could
> connect to?

I would expect a certain level of code quality, collaboration, and review
that distros require for inclusion. That is not the case for the GitHub
repo you presented.

> AFAIK, the only thing remotely relevant is CUDA, and that is
> closed-source (strange to hear lectures about open source from NVIDIA
> people here...)

Please check the git log statistics to estimate the Nvidia/Mellanox/Cumulus
contributions to the Linux kernel and to open source. You will be
surprised.

> So we are trying to give the community such an open-source library,
> or at least an example. Hopefully one day, when more companies
> upstream their drivers for deep-learning accelerators, we can do
> something like libdrm or rdma-core, but for now, it's just our driver.

AFAIR, your driver is not unique; HiSilicon tried to submit something
similar years ago (warpdrive), and they are not alone.

> I have been in this community since 2013, with AMD and then Red Hat, and
> I come with good intentions and a desire to open source and upstream as
> much as I can. I don't think I deserve this kind of response.

There is no need to take it personally. It was you who posted a link to
the github repo. What did you expect?

> The bottom line is that we had this discussion with Greg and Olof and
> the DRM people almost 2 years ago, and if there had been some open-source
> project in userspace or some subsystem in the kernel we could connect to,
> we would have done that instead of what we did. But the fact of the
> matter is that there isn't such a thing. Olof tried, and is still trying,
> to create a h/w accelerator subsystem, but it hasn't gotten off the
> ground yet.

Maybe it is time to do it right.

> Oded
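[Editor's note: to make the libdrm comparison concrete, below is a minimal sketch of what a thunk-style library does: open the device node and wrap each driver ioctl in a plain C function. The device path, ioctl number, struct layout, and function names are hypothetical illustrations, not the real hl-thunk API.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

/* hypothetical query result; not the real habanalabs UAPI struct */
struct hlthunk_hw_ip_info {
	unsigned long long dram_size;
	unsigned int device_id;
};

/* hypothetical ioctl number, for illustration only */
#define HL_IOCTL_INFO _IOWR('H', 0x01, struct hlthunk_hw_ip_info)

/* open the Nth accelerator device node, e.g. /dev/hl0 */
int hlthunk_open(int device_index)
{
	char path[32];

	snprintf(path, sizeof(path), "/dev/hl%d", device_index);
	return open(path, O_RDWR | O_CLOEXEC);
}

/* wrap one driver ioctl in a plain C function, retrying on signals */
int hlthunk_get_hw_ip_info(int fd, struct hlthunk_hw_ip_info *info)
{
	int rc;

	memset(info, 0, sizeof(*info));
	do {
		rc = ioctl(fd, HL_IOCTL_INFO, info);
	} while (rc == -1 && (errno == EINTR || errno == EAGAIN));

	return rc;
}

A caller then links against the library instead of issuing raw ioctls, e.g. fd = hlthunk_open(0); hlthunk_get_hw_ip_info(fd, &info); which is the kind of open-source userspace that kernel reviewers ask to see exercising the exported UAPI headers.]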