RE: [RFC v3 2/3] virtio: Introduce Vdmabuf driver

Hi Gerd,

> -----Original Message-----
> From: Gerd Hoffmann <kraxel@xxxxxxxxxx>
> Sent: Tuesday, February 09, 2021 12:45 AM
> To: Kasireddy, Vivek <vivek.kasireddy@xxxxxxxxx>
> Cc: Daniel Vetter <daniel@xxxxxxxx>; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx; dri-
> devel@xxxxxxxxxxxxxxxxxxxxx; Vetter, Daniel <daniel.vetter@xxxxxxxxx>;
> daniel.vetter@xxxxxxxx; Kim, Dongwon <dongwon.kim@xxxxxxxxx>;
> sumit.semwal@xxxxxxxxxx; christian.koenig@xxxxxxx; linux-media@xxxxxxxxxxxxxxx
> Subject: Re: [RFC v3 2/3] virtio: Introduce Vdmabuf driver
> 
>   Hi,
> 
> > > > > Nack, this doesn't work on dma-buf. And it'll blow up at runtime
> > > > > when you enable the very recently merged CONFIG_DMABUF_DEBUG (would
> > > > > be good to test with that, just to make sure).
> > [Kasireddy, Vivek] Although, I have not tested it yet but it looks like this will
> > throw a wrench in our solution as we use sg_next to iterate over all the struct page *
> > and get their PFNs. I wonder if there is any other clean way to get the PFNs of all
> > the pages associated with a dmabuf.
> 
> Well, there is no guarantee that dma-buf backing storage actually has
> struct page ...
[Kasireddy, Vivek] What if I do mmap() on the fd followed by mlock(), or mmap()
followed by get_user_pages()? If that still fails, would ioremapping the device memory
and poking at the backing storage be an option? Or, if I bind the passed-through GPU device
to vfio-pci and tap into the memory region associated with the device memory, can it be
made to work?
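
To be concrete, this is roughly what I had in mind for the mmap() + get_user_pages() route
(untested sketch; vdmabuf_pin_user_range() is a made-up name, and it assumes the mapping is
fully backed by Guest RAM, i.e. every page has a struct page):

/*
 * Untested sketch: pin the guest pages backing a user mapping of the
 * exported fd so their PFNs can be shared with the host.  Only works if
 * the buffer is entirely backed by struct page (i.e. lives in Guest RAM).
 */
#include <linux/mm.h>
#include <linux/slab.h>

static int vdmabuf_pin_user_range(unsigned long uaddr, size_t size,
                                  struct page ***pages_out, int *npages_out)
{
        int npages = DIV_ROUND_UP(size, PAGE_SIZE);
        struct page **pages;
        int pinned;

        pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* FOLL_LONGTERM since the pages stay pinned while they are shared */
        pinned = pin_user_pages_fast(uaddr, npages,
                                     FOLL_WRITE | FOLL_LONGTERM, pages);
        if (pinned < 0) {
                kvfree(pages);
                return pinned;
        }
        if (pinned != npages) {
                unpin_user_pages(pages, pinned);
                kvfree(pages);
                return -EFAULT;
        }

        *pages_out = pages;
        *npages_out = npages;
        return 0;
}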

Also, I noticed that for PFNs that do not have a valid struct page associated with them,
KVM does a memremap() to access/map them. Is this an option here?
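
For the PFNs without a valid struct page, something along these lines is what I was thinking
of, modelled on what I saw KVM do (again an untested sketch with a made-up name; memunmap()
would be the counterpart to tear it down):

/*
 * Untested sketch: map a PFN range that has no struct page (e.g. device
 * memory) so the kernel can access it, similar to how KVM handles such PFNs.
 */
#include <linux/io.h>
#include <linux/mm.h>

static void *vdmabuf_map_pfn_range(unsigned long pfn, size_t size)
{
        /* Ordinary RAM has a struct page and should be handled via kmap()
         * or the page pointers instead of memremap(). */
        if (pfn_valid(pfn))
                return NULL;

        /* Request a cacheable (write-back) mapping of the physical range */
        return memremap(PFN_PHYS(pfn), size, MEMREMAP_WB);
}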

> 
> > [Kasireddy, Vivek] To exclude such cases, would it not be OK to limit the scope
> > of this solution (Vdmabuf) to make it clear that the dma-buf has to live in Guest RAM?
> > Or, are there any ways to pin the dma-buf pages in Guest RAM to make this
> > solution work?
> 
> At that point it becomes (i915) driver-specific.  If you go that route
> it doesn't look that useful to use dma-bufs in the first place ...
[Kasireddy, Vivek] I would prefer not to make this driver-specific if possible.

> 
> > IIUC, Virtio GPU is used to present a virtual GPU to the Guest and all the rendering
> > commands are captured and forwarded to the Host GPU via Virtio.
> 
> You don't have to use the rendering pipeline.  You can let the i915 gpu
> render into a dma-buf shared with virtio-gpu, then use virtio-gpu only for
> buffer sharing with the host.
[Kasireddy, Vivek] Is this the most viable path forward? I am not sure how complex or
feasible it would be, but I'll look into it.
Also, not using the rendering capabilities of virtio-gpu and turning it into a sharing-only
device means there would be a giant mode switch with a lot of if() conditions sprinkled
across the driver. Are you OK with that?
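
Just to make sure we are talking about the same thing: from userspace, I assume the sharing
path you are describing would look roughly like the sketch below (untested; the handle/fd
names are placeholders, and whether virtio-gpu can actually import an external dma-buf this
way is exactly the part I need to look into):

/*
 * Untested userspace sketch of the buffer-sharing path: let i915 render
 * into a BO, export it as a dma-buf fd, then import that fd into
 * virtio-gpu so virtio-gpu is only used for sharing, not rendering.
 */
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>

static int share_i915_bo_with_virtio_gpu(int i915_fd, uint32_t i915_handle,
                                         int virtgpu_fd, uint32_t *virtgpu_handle)
{
        int dmabuf_fd, ret;

        /* Export the i915 buffer object as a dma-buf file descriptor */
        ret = drmPrimeHandleToFD(i915_fd, i915_handle, DRM_CLOEXEC, &dmabuf_fd);
        if (ret)
                return ret;

        /* Import the dma-buf into virtio-gpu; the guest driver would then
         * need to attach a host-visible resource to it. */
        ret = drmPrimeFDToHandle(virtgpu_fd, dmabuf_fd, virtgpu_handle);
        close(dmabuf_fd);
        return ret;
}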

Thanks,
Vivek
> 
> take care,
>   Gerd

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


