> Hmm, the cross-device buffer sharing framework I have in mind would
> basically be a buffer registry. virtio-gpu would create buffers as
> usual, create a identifier somehow (details to be hashed out), attach
> the identifier to the dma-buf so it can be used as outlined above.

Using physical addresses to identify buffers amounts to using the
guest physical address space as the buffer registry. In particular, if
every device should be able to operate in isolation, each virtio
protocol will have some way to allocate buffers that are accessible to
both the guest and the host. That requires guest physical addresses,
and the guest physical address of the start of a buffer can serve as
its unique identifier in both the guest and the host. Even for buffers
that are only accessible to the host, I think it's reasonable to
allocate guest physical addresses, since the pages still exist (in the
same way that physical addresses make sense for secure physical
memory).

This approach also sidesteps the need for explicit registration. With
explicit registration, either there would need to be some centralized
buffer exporter device, or each protocol would need its own export
function. Using guest physical addresses means that buffers get a
unique identifier at creation time. For example, in the virtio-gpu
protocol, buffers would get this identifier through
VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING, or through
VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 with the impending additions to
resource creation.

> Also note that the guest manages the address space, so the host can't
> simply allocate guest page addresses. Mapping host virtio-gpu resources
> into guest address space is planned, it'll most likely use a pci memory
> bar to reserve some address space. The host can map resources into that
> pci bar, on guest request.
>
> > - virtio-gpu driver could then create a regular DMA-buf object for
> > such memory, because it's just backed by pages (even though they may
> > not be accessible to the guest; just like in the case of TrustZone
> > memory protection on bare metal systems),
>
> Hmm, well, pci memory bars are *not* backed by pages. Maybe we can use
> Documentation/driver-api/pci/p2pdma.rst though. With that we might be
> able to lookup buffers using device and dma address, without explicitly
> creating some identifier. Not investigated yet in detail.

For the Linux guest implementation, mapping a dma-buf doesn't
necessarily require actual pages. The exporting driver's map_dma_buf
function just needs to provide an sg_table with populated dma_address
fields; it doesn't actually need to populate the sg_table with pages.
At the very least, there are places such as i915_gem_stolen.c and (in
some situations) videobuf-dma-sg.c that take this approach.

Cheers,
David
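
P.S. To make the identifier part a bit more concrete, here's a rough
sketch of how a guest driver could derive such an identifier from the
backing it attaches with VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING.
struct virtio_gpu_mem_entry is the real uapi type; buffer_id_t and
buffer_id_from_backing are just made up for illustration:

#include <linux/types.h>
#include <linux/virtio_gpu.h>

/*
 * Hypothetical helper: the guest physical address of the first
 * backing entry doubles as the cross-device buffer identifier.
 * The guest allocated that address, so it is already unique within
 * the guest physical address space, and the host sees the same
 * value in the attach-backing command.
 */
typedef u64 buffer_id_t;

static buffer_id_t
buffer_id_from_backing(const struct virtio_gpu_mem_entry *ents)
{
	return le64_to_cpu(ents[0].addr);
}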
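
Along the same lines, here's roughly what a pageless map_dma_buf could
look like, loosely modeled on what i915_gem_stolen.c does. struct
pageless_buffer and the function name are made up; the point is just
that only the DMA address/length fields of the sg_table get filled in:

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

struct pageless_buffer {
	dma_addr_t base;	/* bus address of the backing storage */
	size_t size;
};

static struct sg_table *
pageless_map_dma_buf(struct dma_buf_attachment *attach,
		     enum dma_data_direction dir)
{
	struct pageless_buffer *buf = attach->dmabuf->priv;
	struct sg_table *st;
	struct scatterlist *sg;

	st = kmalloc(sizeof(*st), GFP_KERNEL);
	if (!st)
		return ERR_PTR(-ENOMEM);

	if (sg_alloc_table(st, 1, GFP_KERNEL)) {
		kfree(st);
		return ERR_PTR(-ENOMEM);
	}

	/* No struct page behind this memory; only the DMA side of
	 * the scatterlist entry is filled in. */
	sg = st->sgl;
	sg->offset = 0;
	sg->length = buf->size;
	sg_dma_address(sg) = buf->base;
	sg_dma_len(sg) = buf->size;

	return st;
}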