On Thu, Aug 04, 2022 at 09:52:47AM +0800, Jason Wang wrote:
> On Thu, Aug 4, 2022 at 1:46 AM Andrey Zhadchenko
> <andrey.zhadchenko@xxxxxxxxxxxxx> wrote:
> >
> > Hi!
> >
> > I recently sent the vhost-blk patchset, and Stefano suggested joining
> > efforts on developing vdpa-blk instead.
> > I played with it a bit and it looks like vdpa itself pins the whole
> > guest memory. Is there a way to control this, or to reduce the pinned
> > amount to just the device pages?
> > It looks like even vdpa-sim requires all memory to be pinned [1].
> > Pinning this much will surely impact guest density.
>
> It depends on the parent.
>
> When allocating the vDPA device, the parent can claim that it supports
> virtual addresses, in which case pinning is avoided:
>
> /**
>  * __vdpa_alloc_device - allocate and initialize a vDPA device
>  * This allows the driver to do some preparation after the device is
>  * initialized but before it is registered.
> ...
>  * @use_va: indicate whether virtual address must be used by this device
>  */
>
> The only user so far is VDUSE, which is a software parent in userspace
> with a customized swiotlb for kernel drivers.
>
> The simulator predates this feature, so we stuck with the pinning
> method. Technically we could switch it to VA mode, but that might have
> some performance impact (mostly from the copy_from|to_user()).
>
> This option might also be useful for a hardware parent if PRI or device
> page faults are supported in the future.
>
> Thanks

Well, VDUSE has this funky bounce-buffer design. It works, but it is
costly performance-wise.

> >
> > Kind regards,
> > Andrey
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1868535
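
For anyone wanting to experiment with the VA path: below is a minimal,
hedged sketch of how a parent driver opts into it when allocating its
vDPA device. All my_* names are hypothetical, and the vdpa_alloc_device()
argument list (the group / address-space counts) follows v5.19-era
kernels, so it may need adjusting on other versions.

#include <linux/vdpa.h>

struct my_vdpa_dev {
	struct vdpa_device vdpa;	/* must be the first member */
	/* device-private state goes here */
};

/*
 * In VA mode the mapping callbacks (.set_map / .dma_map) see userspace
 * virtual addresses instead of pre-pinned pages, so the parent has to
 * handle translation itself, e.g. with a bounce buffer as VDUSE does.
 */
static const struct vdpa_config_ops my_vdpa_config_ops = {
	/* vq ops, .set_map / .dma_map / .dma_unmap, etc. */
};

static struct my_vdpa_dev *my_vdpa_create(struct device *parent)
{
	struct my_vdpa_dev *dev;

	/* the final argument (use_va = true) claims VA support: the core
	 * then skips pinning guest memory for this device */
	dev = vdpa_alloc_device(struct my_vdpa_dev, vdpa, parent,
				&my_vdpa_config_ops, 1, 1, NULL, true);
	if (IS_ERR(dev))
		return NULL;

	return dev;
}

The point of the sketch is only the last argument: with use_va set, the
cost moves from pinning to per-access translation/copy in the parent,
which is exactly the trade-off discussed above.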