So, there won't be a dmabuf leaking problem, as we release all the
dmabuf_objs in the release ops when user space crashes. Can we stop
looking for ways to fix the dmabuf life-cycle issue and instead consider
a generic way to handle buffer exposing? Does the generic way need the
close ioctl?

In my opinion, it's like building up a producer-consumer way to expose
the buffer:

                 Create buffer and return its info
  Mdev devices -----------------------------------------------------> User space
   (producer)  <----------------------------------------------------- (consumer)
                              Close it

Alex and Gerd, can you share your thoughts? Thanks.

BR,
Tina

> -----Original Message-----
> From: Zhang, Tina
> Sent: Friday, September 29, 2017 7:43 AM
> To: 'Gerd Hoffmann' <kraxel@xxxxxxxxxx>; zhenyuw@xxxxxxxxxxxxxxx; Wang,
> Zhi A <zhi.a.wang@xxxxxxxxx>; Tian, Kevin <kevin.tian@xxxxxxxxx>
> Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>; intel-gfx@xxxxxxxxxxxxxxxxxxxxx;
> intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx; Alex Williamson
> <alex.williamson@xxxxxxxxxx>; Lv, Zhiyuan <zhiyuan.lv@xxxxxxxxx>
> Subject: RE: [PATCH v14 5/7] vfio: ABI for mdev display dma-buf operation
>
> Thanks for the patch. Actually, I did the same thing in my local repo, and I
> also have a patch for the local QEMU repo to test it. I will send them out
> later.
>
> The reason I want to propose the close IOCTL is that the current lock
> (fb_obj_list_lock) cannot synchronize the releasing and reusing of
> intel_vgpu_fb_info.
> You see, the intel_vgpu_fb_info reusing and releasing happen in different
> threads. There is a case where intel_vgpu_find_dmabuf can return an
> intel_vgpu_fb_obj while that intel_vgpu_fb_obj is on the way to being
> released. That's the problem.
>
> The only invoker of the close IOCTL is QEMU. So, if QEMU crashes, the whole
> vGPU's resources are going to be released. We can handle releasing our
> dmabuf_objs there.
>
> Thanks.
>
> BR,
> Tina
>
> > -----Original Message-----
> > From: intel-gvt-dev
> > [mailto:intel-gvt-dev-bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of Gerd
> > Hoffmann
> > Sent: Wednesday, September 27, 2017 6:11 PM
> > To: Zhang, Tina <tina.zhang@xxxxxxxxx>; zhenyuw@xxxxxxxxxxxxxxx; Wang,
> > Zhi A <zhi.a.wang@xxxxxxxxx>; Tian, Kevin <kevin.tian@xxxxxxxxx>
> > Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>;
> > intel-gfx@xxxxxxxxxxxxxxxxxxxxx; intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx;
> > Alex Williamson <alex.williamson@xxxxxxxxxx>; Lv, Zhiyuan
> > <zhiyuan.lv@xxxxxxxxx>
> > Subject: Re: [PATCH v14 5/7] vfio: ABI for mdev display dma-buf
> > operation
> >
> >   Hi,
> >
> > > So, there is a problem with releasing the cached dmabuf_obj. We
> > > cannot rely on drm_i915_gem_object_ops.release() to release the
> > > cached dmabuf_obj, as this release operation runs in another
> > > thread, which leads to a race condition that is tricky to solve
> > > without touching other modules.
> >
> > PLANE_INFO just creates an intel_vgpu_dmabuf_obj.
> >
> > GET_DMABUF creates a fresh proxy gem object and dmabuf.
> >
> > The proxy gem object references the intel_vgpu_dmabuf_obj but not the
> > other way around. Then you can simply refcount the intel_vgpu_dmabuf_obj
> > and be done with it.
> >
> > https://www.kraxel.org/cgit/linux/commit/?h=gvt-dmabuf-v14&id=350a0e834971e6f53d7235d8b6167bed4dccf074
> >
> > Note: the patch renamed intel_vgpu_dmabuf_obj to intel_vgpu_fb_obj,
> > because it doesn't refer to dmabufs any more. It basically carries
> > the guest plane/framebuffer information and the ID associated with it.
> >
> > > So, in order to solve that kind of problem, I'd like to add one more
> > > ioctl, which is used for user mode to close the dmabuf_obj.
> >
> > Depending on userspace notifying the kernel for that kind of cleanup
> > is a bad idea. What happens in case userspace crashes? Do you leak
> > dmabufs then?
> >
> > cheers,
> >   Gerd
> >
> > _______________________________________________
> > intel-gvt-dev mailing list
> > intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx
> > https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx