On Wed, Oct 16, 2019 at 08:36:38PM +0200, Jean-Philippe Brucker wrote:
> Date: Wed, 16 Oct 2019 20:36:38 +0200
> From: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>
> To: Zhangfei Gao <zhangfei.gao@xxxxxxxxxx>
> Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>, Arnd Bergmann
>  <arnd@xxxxxxxx>, Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>,
>  jonathan.cameron@xxxxxxxxxx, grant.likely@xxxxxxx,
>  ilias.apalodimas@xxxxxxxxxx, francois.ozog@xxxxxxxxxx,
>  kenneth-lee-2012@xxxxxxxxxxx, Wangzhou <wangzhou1@xxxxxxxxxxxxx>,
>  "haojian . zhuang" <haojian.zhuang@xxxxxxxxxx>,
>  linux-accelerators@xxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx,
>  linux-crypto@xxxxxxxxxxxxxxx, Kenneth Lee <liguozhu@xxxxxxxxxxxxx>,
>  Zaibo Xu <xuzaibo@xxxxxxxxxx>
> Subject: Re: [PATCH v6 1/3] uacce: Add documents for uacce
> Message-ID: <20191016183638.GB1533448@lophozonia>
>
> Hi,
>
> I already commented on the interface in patch 2/3, so I just have a few
> additional comments on the documentation itself.
>
> On Wed, Oct 16, 2019 at 04:34:31PM +0800, Zhangfei Gao wrote:
> > +The user API
> > +------------
> > +
> > +We adopt a polling-style interface in user space: ::
> > +
> > +    int wd_request_queue(struct wd_queue *q);
> > +    void wd_release_queue(struct wd_queue *q);
> > +    int wd_send(struct wd_queue *q, void *req);
> > +    int wd_recv(struct wd_queue *q, void **req);
> > +    int wd_recv_sync(struct wd_queue *q, void **req);
> > +    void wd_flush(struct wd_queue *q);
> > +
> > +wd_recv_sync() is a wrapper around its non-sync version. It traps into
> > +the kernel and waits until the queue becomes available.
> > +
> > +If the queue does not support SVA/SVM, the following helper functions
> > +can be used to create static virtual shared memory: ::
> > +
> > +    void *wd_reserve_memory(struct wd_queue *q, size_t size);
> > +    int wd_share_reserved_memory(struct wd_queue *q,
> > +                                 struct wd_queue *target_q);
> > +
> > +The user API is not mandatory.
> > +It is simply a suggestion of and a hint at what the kernel interface
> > +is supposed to be.
>
> Maybe move this to the beginning of the section, to make it clear that
> you're describing an example API. On first read I found it odd that we're
> documenting a userspace library in this document.
>
> [...]
>
> > +
> > +The Memory Sharing Model
> > +------------------------
> > +The perfect form of a Uacce device is one that supports SVM/SVA. We
> > +built this upon Jean-Philippe Brucker's SVA patches. [1]
>
> I don't think this belongs in the doc, more in a cover letter. Since the
> SVA API is now upstream (implementation in progress), you could simply say
> something like "the uacce device is built around the IOMMU SVA API".
>
> > +
> > +If the hardware supports UACCE_DEV_SVA, the user process's page table
> > +is shared with the opened queue, so the device can access any address
> > +in the process address space and can raise a page fault if the
> > +physical page is not yet available. It can also access addresses in
> > +the kernel space, which are referred to by another page table
> > +particular to the kernel. Most IOMMU implementations can handle this
> > +with a tag on the device's address request. For example, the ARM SMMU
> > +uses the SSV bit to indicate whether an address request is for kernel
> > +or user space.
>
> That might be a bit too detailed, you can just say that the device can
> access multiple address spaces, including the one without PASID. All IOMMU
> architectures with PASID support this now.
>
> > +Queue file regions can be used:
> > +UACCE_QFRT_MMIO: device mmio region (map to user)
> > +UACCE_QFRT_DUS: device user share (map to dev and user)
> > +
> > +If the device does not support UACCE_DEV_SVA, Uacce allows only one
> > +process at a time. The DMA API cannot be used either, since Uacce will
> > +create an unmanaged iommu_domain for the device.
> > +Queue file regions can be used:
> > +UACCE_QFRT_MMIO: device mmio region (map to user)
> > +UACCE_QFRT_DKO: device kernel-only (map to dev, no user)
> > +UACCE_QFRT_DUS: device user share (map to dev and user)
> > +UACCE_QFRT_SS: static shared memory (map to devs and user)
> > +
> > +
> > +The Fork Scenario
> > +=================
> > +For a process with allocated queues and shared memory, what happens if
> > +it forks a child?
> > +
> > +The fd of the queue will be duplicated on fork, so the child can send
> > +requests to the same queue as its parent. But requests sent from any
> > +process other than the one that opened the queue will be blocked.
>
> Would it be correct and clearer to say "The fd of the queue is duplicated
> on fork, but requests sent from the child process are blocked"?
>
> > +
> > +It is recommended to add O_CLOEXEC to the queue file.
> > +
> > +The queue mmap space has VM_DONTCOPY set in its VMA, so the child will
> > +lose all those VMAs.
> > +
> > +This is one reason why Uacce does not adopt the mode used in VFIO and
> > +InfiniBand. Both solutions can set any user pointer for hardware
> > +sharing, but they cannot support fork while DMA is in progress, or the
> > +copy-on-write procedure will make the parent process lose its physical
> > +pages.
> > +
> > +
> > +Difference to the VFIO and IB framework
> > +---------------------------------------
> > +The essential function of Uacce is to let the device access user
> > +addresses directly. Many device drivers in the kernel do the same, and
> > +both VFIO and IB can provide similar functions at the framework level.
> > +
> > +But Uacce has a different goal: "share address space". It does not
> > +treat a request to the accelerator as an enclosed data structure; it
> > +treats the accelerator as another thread of the same process, so the
> > +accelerator can refer to any address used by the process.
> > +
> > +Both VFIO and IB take this as "memory sharing", not "address sharing".
> > +They care more about sharing a block of memory. But if an address is
> > +stored in the block and refers to another memory region, that address
> > +may not be valid.
> > +
> > +By adding more constraints to the VFIO and IB frameworks we might, in
> > +some sense, achieve a similar goal. But we finally gave that up. Both
> > +VFIO and IB carry extra assumptions that are unnecessary for Uacce,
> > +and they might hurt each other if we tried to merge them together.
>
> I don't know if this particular rationale belongs here rather than a cover
> letter, but some of this section can be useful to let users decide if they
> need uacce or VFIO.
>
> For the record I'm still not entirely convinced that a new solution is
> preferable to vfio-mdev.
> * Existing userspace drivers such as DPDK may someday benefit from
>   adding SVA support to VFIO.
> * Patch 2/3 does seem to duplicate a lot of VFIO code for the !SVA mode.
>   I'd rather we avoided !SVA support altogether at first, to make the
>   code simpler.
> * The issue with fork should be fixed in VFIO anyway, if it's an actual
>   concern for userspace drivers.
>
> On the other hand, I do agree with the following paragraph that a lighter
> solution such as uacce focusing on shared address space and queues could
> mean less work for device drivers and libraries.
>
> It would be interesting to write a device driver prototype that implements
> both vfio-mdev (with added SVA support) and uacce interfaces and compare
> them. But since I'm not the one writing this or the corresponding
> userspace libs, I'll stop advocating vfio-mdev next time and focus on the
> implementation details :)

Dear Jean,

Please let me answer your question about why we built another subsystem
rather than using vfio-mdev. You might remember that we did build
WarpDrive on top of vfio-mdev from the very beginning.
Both RFCv1 (Share Parent IOMMU mdev) and RFCv2 (Share Domain mdev) were
based on mdev. We got many comments, and we finally felt we could not
address all of them if we continued in the mdev direction. I think the
key problem here is that mdev is a virtual *device*. So:

1. As you have said, this creates more logic which is useless for an
   accelerator. For example, it gives the user full control of DMA and
   irqs, it replays the DMA mapping for a newly attached device, and it
   creates VFIO IOMMU drivers... These are necessary to present a raw
   device to a virtual machine, but they are not necessary for an
   accelerator.

2. You are forced to partition the resource into a device before using
   it. And if the user process crashes, we need an extra facility to put
   the resource back into the pool.

3. Alex Williamson argues that vfio is not just used for virtualisation,
   but for the time being it is indeed used only by virtualisation. And
   Jerome Glisse (also from Red Hat) said he could accept some problems
   from a virtual machine, because we can constrain the behavior of the
   virtual machine program (such as qemu), but he could not accept them
   happening in a general application. For example, if you pin memory in
   a process and then fork, you may lose the physical pages due to COW.
   We can solve this problem in uacce by letting go of the shared pages
   in the child.

So we think we should not continue in the mdev direction for uacce. Even
if we could merge the logic together, it would become a burden for both
vfio and uacce. Both of them make use of the IOMMU, so of course they
have some similar code. But they go in different directions: VFIO is
trying to export virtual DMA and irqs to user space, while uacce is
trying to let the accelerator share an address space with the process
itself. Many tricks can be played between the final user interface and
the address translation framework; merging them together would not be
good for either of them.

Hope this answers some of your questions.
Cheers:)

> Thanks,
> Jean
>
> > +
> > +VFIO manages the resources of a hardware device as a "virtual
> > +device". If a device needs to serve a separate application, it must
> > +isolate the resources as a separate virtual device, even though the
> > +life cycles of the application and the virtual device are not
> > +necessarily related. And most of the concepts needed to make it a
> > +"device", such as bus, driver, probe and so on, are unnecessary too.
> > +The logic added to VFIO to enable address sharing does not help with
> > +"creating a virtual device" either.
> > +
> > +IB creates a "verbs" standard for sharing memory regions with another
> > +remote entity. Most of these verbs exist to keep memory regions
> > +between entities synchronized. This is not what an accelerator needs.
> > +An accelerator is in the same memory system as the CPU; it refers to
> > +the same memory system shared by CPU and devices. So the local memory
> > +terms/verbs are good enough for it, and extra "verbs" are not
> > +necessary. And its queue (like the queue pair in IB) is a
> > +communication channel direct to the accelerator hardware; it says
> > +nothing about memory itself.
> > +
> > +Further, both VFIO and IB use pinning (get_user_pages) to lock local
> > +memory in place. This is flexible, but it can cause other problems.
> > +For example, if the user process forks a child, the COW procedure may
> > +make the parent process lose the pages it is sharing with the device.
> > +This may be fixed in the future, but it is not going to be easy.
> > +(There was a discussion about this at Linux Plumbers Conference 2018
> > +[2].)
> > +
> > +So we chose to build the solution directly on top of the IOMMU
> > +interface. The IOMMU is the essential way for a device and a process
> > +to share their page mapping from the hardware perspective, so it is
> > +safe to build a software solution on this assumption. Uacce manages
> > +the IOMMU interface for the accelerator device, so the device driver
> > +can export some of its resources to user space.
> > +Uacce can then make sure the device and the process have the same
> > +address space.
> > +
> > +
> > +References
> > +==========
> > +.. [1] http://jpbrucker.net/sva/
> > +.. [2] https://lwn.net/Articles/774411/
> > --
> > 2.7.4

--
			-Kenneth Lee (Hisilicon)