On 5/3/2019 3:29 PM, Christoph Hellwig wrote:
> On Thu, May 02, 2019 at 02:47:57PM +0300, Maxim Levitsky wrote:
>> If the mdev device driver also sets NVME_F_MDEV_DMA_SUPPORTED,
>> the mdev core will DMA map all the guest memory into the NVMe
>> device, so that the NVMe device driver can use the DMA addresses
>> as passed from the mdev core driver.
> We really need a proper block layer interface for that so that
> io_uring or the nvme target can use pre-mapping as well.
I think we can also find a way to use nvme-mdev for the target p2p
offload feature.

I don't see a big difference between handing an NVMe queue and a
namespace/partition to a guest OS or to P2P: in both cases the I/O is
issued by an external entity and polled outside the PCI driver.

Thoughts?
_______________________________________________
Linux-nvme mailing list
Linux-nvme@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/linux-nvme