On Mon, Aug 06, 2018 at 09:49:40AM -0600, Alex Williamson wrote:
> On Mon, 6 Aug 2018 09:40:04 +0800
> Kenneth Lee <liguozhu@xxxxxxxxxxxxx> wrote:
>
> > 1. It supports thousands of processes. Take the zip accelerator as an
> > example: any application that needs data compression/decompression will
> > need to interact with the accelerator. To support that, you have to create
> > tens of thousands of mdevs for their usage. I don't think it is a good
> > idea to have so many devices in the system.
>
> Each mdev is a device, regardless of whether there are hardware
> resources committed to the device, so I don't understand this argument.
>
> > 2. The application does not want to own the mdev for long. It just needs
> > an access point for the hardware service. If it has to interact with a
> > management agent for allocation and release, this makes the problem
> > complex.
>
> I don't see how the length of the usage plays a role here either. Are
> you concerned that the time it takes to create and remove an mdev is
> significant compared to the usage time? Userspace is certainly welcome
> to create a pool of devices, but why should it be the kernel's
> responsibility to dynamically assign resources to an mdev? What's the
> usage model when resources are unavailable? It seems there's
> complexity in either case, but it's generally userspace's responsibility
> to impose a policy.
>

Can a vfio device created to represent an mdev be shared between several
processes? It doesn't need to be exclusive. The path to the hardware is
established by each process binding to SVM, with the IOMMU ensuring that
the PASID is plumbed properly. One can think of the same hardware as being
shared between several processes; the hardware enforces the isolation via
the PASID. For these cases it isn't required to create a device per
process.

Cheers,
Ashok
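
To make the sharing model concrete, below is a minimal, hypothetical
kernel-side sketch (not the actual WarpDrive or mdev code): a single
character device whose open() binds the calling process's mm through the
in-tree SVA API, so every opener gets its own PASID while sharing one
device node. The sharedacc_* names and the sharedacc_dev() helper are made
up for illustration, and the exact iommu_sva_bind_device() signature has
varied across kernel versions.

/*
 * Hypothetical sketch only: one shared char device, many processes.
 * Each open() binds the opener's mm to the IOMMU and gets a PASID,
 * so the hardware isolates processes by PASID and no per-process
 * mdev is needed.  The sharedacc_* names are illustrative, not real code.
 */
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/iommu.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct sharedacc_ctx {
	struct iommu_sva *sva;	/* this process's SVA bond */
	u32 pasid;		/* PASID the device tags DMA with */
};

/* Assumed helper returning the accelerator's struct device. */
extern struct device *sharedacc_dev(void);

static int sharedacc_open(struct inode *inode, struct file *filp)
{
	struct sharedacc_ctx *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	/*
	 * Bind the calling process's address space; the IOMMU driver
	 * allocates a PASID and sets up the page-table plumbing.
	 * (Older kernels take a third drvdata argument here.)
	 */
	ctx->sva = iommu_sva_bind_device(sharedacc_dev(), current->mm);
	if (IS_ERR(ctx->sva)) {
		int ret = PTR_ERR(ctx->sva);

		kfree(ctx);
		return ret;
	}
	ctx->pasid = iommu_sva_get_pasid(ctx->sva);

	filp->private_data = ctx;
	return 0;
}

static int sharedacc_release(struct inode *inode, struct file *filp)
{
	struct sharedacc_ctx *ctx = filp->private_data;

	iommu_sva_unbind_device(ctx->sva);	/* drop the PASID binding */
	kfree(ctx);
	return 0;
}

static const struct file_operations sharedacc_fops = {
	.owner		= THIS_MODULE,
	.open		= sharedacc_open,
	.release	= sharedacc_release,
};

In this sketch userspace simply open()s the same /dev node from each
process; the PASID, not a per-process device, carries the isolation.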