On 7/16/19 4:37 PM, Peter Krempa wrote:
> On Thu, Jul 11, 2019 at 17:54:11 +0200, Michal Privoznik wrote:
>> If a domain has an NVMe disk configured, then we need to create
>> /dev/vfio/* paths in the domain's namespace so that qemu can open
>> them.
>>
>> Signed-off-by: Michal Privoznik <mprivozn@xxxxxxxxxx>
>> ---
>>  src/qemu/qemu_domain.c | 35 ++++++++++++++++++++++++++++++++++-
>>  1 file changed, 34 insertions(+), 1 deletion(-)
>>
>> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
>> index 949bbace88..cd3205a588 100644
>> --- a/src/qemu/qemu_domain.c
>> +++ b/src/qemu/qemu_domain.c
>> @@ -11831,7 +11831,8 @@ qemuDomainGetHostdevPath(virDomainDefPtr def,
>>          perm = VIR_CGROUP_DEVICE_RW;
>>
>>      if (teardown) {
>> -        if (!virDomainDefHasVFIOHostdev(def))
>> +        if (!virDomainDefHasVFIOHostdev(def) &&
>> +            !virDomainDefHasNVMeDisk(def))
> As I said previously, I don't like this construct, and this hunk also
> feels like it does not belong in this patch.
The thing is that NVMe disks are both hostdevs and disks. So whenever we
deal with /dev/vfio/* we have to consider both.
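To illustrate (the helper name below is made up, not something in the
tree), the open-coded check in the hunk above is just the negation of a
single "does anything in this domain still need VFIO?" predicate:

    /* Hypothetical helper: returns true iff any device in the domain
     * needs /dev/vfio/vfio, be it a VFIO hostdev or an NVMe disk. */
    static bool
    exampleDomainNeedsVFIO(const virDomainDef *def)
    {
        return virDomainDefHasVFIOHostdev(def) ||
               virDomainDefHasNVMeDisk(def);
    }

The teardown condition in the hunk would then simply read
!exampleDomainNeedsVFIO(def).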
One solution that comes to my mind is to take /dev/vfio/vfio completely
out of the picture at the qemuDomainGetHostdevPath() and
qemuDomainGetNVMeDiskIOMMUGroupPaths() levels: have them return only the
single path the device is associated with (/dev/vfio/N) and let the
caller then check whether /dev/vfio/vfio must also be included in
whatever it is they want to do.
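Roughly something like this (all names and signatures below are
illustrative only, not what the tree actually has):

    /* Per-device lookup returns just the IOMMU group path the device
     * is bound to (e.g. /dev/vfio/22), never /dev/vfio/vfio itself. */
    static char *
    exampleGetVFIOGroupPath(unsigned int iommuGroup)
    {
        char *path = NULL;

        if (virAsprintf(&path, "/dev/vfio/%u", iommuGroup) < 0)
            return NULL;

        return path;
    }

    /* The caller alone then decides about the shared node, e.g. when
     * populating the namespace: */
    if (exampleDomainNeedsVFIO(def) &&
        qemuDomainCreateDevice("/dev/vfio/vfio", data, false) < 0)
        goto cleanup;

That way neither qemuDomainGetHostdevPath() nor
qemuDomainGetNVMeDiskIOMMUGroupPaths() has to know about the other's
users of /dev/vfio/vfio.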
Michal
--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list