On Thu, Aug 04, 2022 at 09:52:47AM +0800, Jason Wang wrote:
On Thu, Aug 4, 2022 at 1:46 AM Andrey Zhadchenko
<andrey.zhadchenko@xxxxxxxxxxxxx> wrote:
Hi!
I recently sent a vhost-blk patchset, and Stefano suggested that we instead join
efforts on developing vdpa-blk.
I played with it a bit, and it looks like vDPA itself pins the whole guest
memory. Is there a way to control this, or to reduce the pinned amount to only
the pages the device actually uses?
It looks like even vdpa-sim requires all memory to be pinned [1]. Pinning
this much will surely impact guest density.
It depends on the parent.
When allocating the vDPA device, the parent can claim that it supports
virtual addresses, in which case pinning is avoided:
/**
* __vdpa_alloc_device - allocate and initialize a vDPA device
* This allows the driver to do some preparation after the device is
* initialized but before it is registered.
...
* @use_va: indicate whether virtual address must be used by this device
*/
The only user so far is VDUSE, which is a software parent in userspace
with a customized swiotlb for kernel drivers.
The simulator came before this feature, so it sticks to the pinning method.
I based my vdpa-blk PoC on the simulator and didn't realize this; maybe I
should have used the VA mode instead.
Technically, we can switch to using the VA mode, but it might have some
performance impact (mostly from copy_from_user()/copy_to_user()).
Would the cost be comparable to that of a vhost-blk device? (IIRC the
vqs in vhost use copy_from_user()/copy_to_user(), right?)
Thanks,
Stefano
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization