On Thu, Sep 21, 2023 at 01:52:24PM -0300, Jason Gunthorpe wrote:
> On Thu, Sep 21, 2023 at 10:43:50AM -0600, Alex Williamson wrote:
> 
> > > With that code in place a legacy driver in the guest has the look
> > > and feel of a transitional device with legacy support for both its
> > > control and data path flows.
> > 
> > Why do we need to enable a "legacy" driver in the guest? The very name
> > suggests there's an alternative driver that perhaps doesn't require
> > this I/O BAR. Why don't we just require the non-legacy driver in the
> > guest rather than increase our maintenance burden? Thanks,
> 
> It was my reaction also.
> 
> Apparently there is a big deployed base of people using old guest VMs
> with old drivers and they do not want to update their VMs. It is the
> same basic reason why qemu supports all those weird old machine types
> and HW emulations. The desire is to support these old devices so that
> old VMs can work unchanged.
> 
> Jason

And you are saying all these very old VMs use such a large number of
legacy devices that over-counting of locked memory (due to vdpa not
correctly using iommufd) is a problem that urgently needs to be solved,
otherwise the solution has no value?

Another question I'm interested in is whether there's actually a
performance benefit to using this as compared to just software vhost.
I note there's a VM exit on each IO access, so... perhaps? It would be
nice to see some numbers.
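
Just so we are talking about the same thing: by "IO access" I mean the
legacy queue kick through the I/O BAR. A minimal sketch of that path
(my illustration, not code from this series; VIRTIO_PCI_QUEUE_NOTIFY is
the real legacy register offset, vq_kick/io_base are made-up names):

/* Legacy virtio-pci queue notification: a guest I/O port write.
 * Under virtualization each such write traps, i.e. one VM exit
 * per kick. Assumes x86 and ioperm()/iopl() for port access. */
#include <stdint.h>
#include <sys/io.h>                     /* outw() */

#define VIRTIO_PCI_QUEUE_NOTIFY 16      /* offset in the legacy I/O BAR */

/* hypothetical helper: kick virtqueue 'queue_index' on a device
 * whose legacy I/O BAR starts at 'io_base' */
static void vq_kick(uint16_t io_base, uint16_t queue_index)
{
        outw(queue_index, io_base + VIRTIO_PCI_QUEUE_NOTIFY); /* value, then port */
}

Every kick on the data path goes through one of those exits, which is
why I'd want this measured against software vhost before assuming a win.

-- 
MST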