Re: The issue about adding multipath device's targets into qemu-pr-helper's namespace

On 2020-07-17 07:05, Michal Prívozník wrote:
On 7/15/20 5:35 AM, Lin Ma wrote:
On 2020-07-14 14:38, Michal Privoznik wrote:
On 7/14/20 3:41 PM, Lin Ma wrote:
Hi all,

I have a namespace question about passthrough disks (multipath devices). With namespaces and cgroups enabled in qemu.conf, the target(s) of a multipath device are not added to qemu-pr-helper's namespace under certain circumstances, which causes PERSISTENT RESERVE commands to fail in the guest.


Yeah, this is expected. I mean, the failure is expected if not all
devices are added to the namespace.

When a user starts a VM, qemuDomainSetupDisk() is invoked from thread A to build the namespace (this thread's id becomes qemu's pid), and qemuSetupImagePathCgroup() is invoked from thread B to build the cgroup.

Both functions invoke virDevMapperGetTargets() to resolve the multipath device into its target path strings and fill targetPaths[].
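
For reference, the dm query both code paths perform follows the standard libdevmapper deps pattern. The sketch below shows the public-API call sequence only (it is not libvirt's actual code, and error handling is trimmed); running it needs libdevmapper, root, and an existing dm device, so treat it as illustrative:

```c
#include <libdevmapper.h>
#include <stdio.h>
#include <sys/sysmacros.h>

/* Sketch of the DM_DEVICE_DEPS query that virDevMapperGetTargets
 * performs for a device name such as "vm1-data". */
static int print_deps(const char *devname)
{
    struct dm_task *dmt;
    struct dm_deps *deps;
    uint32_t i;

    if (!(dmt = dm_task_create(DM_DEVICE_DEPS)))
        return -1;
    if (!dm_task_set_name(dmt, devname) ||
        !dm_task_run(dmt)) {   /* the call reported to fail with errno=9 */
        dm_task_destroy(dmt);
        return -1;
    }
    if ((deps = dm_get_deps(dmt))) {
        for (i = 0; i < deps->count; i++)
            printf("/dev/block/%u:%u\n",
                   major(deps->device[i]), minor(deps->device[i]));
    }
    dm_task_destroy(dmt);
    return 0;
}

int main(int argc, char **argv)
{
    return (argc > 1 && print_deps(argv[1]) == 0) ? 0 : 1;
}
```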

The issue I experienced is:
After libvirtd starts, everything works well for the first booted VM that has
a passthrough multipath device.
But if I shut it down and start it again, or keep it running and start another VM that has a different passthrough multipath device, then the target(s) of the freshly started VM are not added to the related qemu-pr-helper's namespace, and PERSISTENT RESERVE commands fail in the corresponding guest. I dug into the code: in this situation targetPaths[] in qemuDomainSetupDisk() is never filled, it stays NULL after virDevMapperGetTargets() returns. virDevMapperGetTargets() doesn't fill targetPaths[] because libdevmapper's dm_task_run()
returns 0 with errno 9 (Bad file descriptor).
So far I don't understand why dm_task_run() returns 0 in this situation. BTW, virDevMapperGetTargets() can always successfully fill targetPaths[]
in qemuSetupImagePathCgroup().

What is your libvirt version? I've merged a fix for something similar
not long ago: v6.5.0-rc1~190

Can you please check v6.5.0?

The libvirt version I used in the tests is git master.
One thing I am sure of is that the multipath devices I used are valid and functional.

In either test A or test B, every time we restart libvirtd the issue does not occur for
the first booted VM.

In other words, the target of the multipath device can be parsed and added to qemu-pr-helper's
namespace for the first booted VM.

Once libvirtd has invoked qemuSetupImagePathCgroup()->virDevMapperGetTargetsImpl()
and successfully parsed any valid multipath device,
from then on qemuDomainSetupDisk()->virDevMapperGetTargetsImpl()->dm_task_run()
returns 0 with errno=9 when parsing any valid multipath device.

At this point I'm wondering whether we are using the API correctly.
If you disable namespaces, then CGroups still need to know the targets
and will query dm. Does that work?
With only namespaces disabled, or only CGroups disabled, querying dm deps works well.

The aim is to do what 'dmsetup deps' does. Once you reach the error
state - does that command work?
'dmsetup deps' works well even while I am in the error state.


errno=9 is EBADF so maybe this is a dm issue after all?
Hmm... so far I lean towards an issue in the libvirt code rather than in dm.


BTW:

Maybe we will need to merge the following patch to log error messages from dm?

https://gitlab.com/MichalPrivoznik/libvirt/-/commit/ed3a7ae5d0cd3f492fe219dbd6b4820dde55841f
After merging this commit, here is part of the log output:

* libvirtd log:
2020-07-20 08:19:27.970+0000: 4621: info : libvirt version: 6.6.0
2020-07-20 08:19:27.970+0000: 4621: info : hostname: tw
......
2020-07-20 08:19:27.970+0000: 4636: debug : virThreadJobSetWorker:75 : Thread 4636 is running worker prio-rpc-worker
2020-07-20 08:19:27.971+0000: 4631: debug : virThreadJobSetWorker:75 : Thread 4631 is running worker rpc-worker
2020-07-20 08:19:27.971+0000: 4637: debug : virThreadJobSetWorker:75 : Thread 4637 is running worker prio-rpc-worker
2020-07-20 08:19:27.971+0000: 4635: debug : virThreadJobSetWorker:75 : Thread 4635 is running worker rpc-worker
2020-07-20 08:19:27.971+0000: 4634: debug : virThreadJobSetWorker:75 : Thread 4634 is running worker rpc-worker
2020-07-20 08:19:27.971+0000: 4639: debug : virThreadJobSetWorker:75 : Thread 4639 is running worker prio-rpc-worker
2020-07-20 08:19:27.971+0000: 4638: debug : virThreadJobSetWorker:75 : Thread 4638 is running worker prio-rpc-worker
......
2020-07-20 08:19:27.971+0000: 4632: debug : virThreadJobSetWorker:75 : Thread 4632 is running worker rpc-worker
......
2020-07-20 08:19:33.293+0000: 4632: debug : qemuSetupImagePathCgroup:73 : Allow path /opt/vms/guest01/disk0.qcow2, perms: rw
2020-07-20 08:19:33.293+0000: 4632: debug : qemuSetupImagePathCgroup:73 : Allow path /dev/mapper/control, perms: rw
2020-07-20 08:19:33.293+0000: 4632: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d1\x2dguest01.scope/devices.allow' to 'c 10:236 rw'
2020-07-20 08:19:33.293+0000: 4632: debug : virFileClose:134 : Closed fd 28
2020-07-20 08:19:33.293+0000: 4632: debug : qemuSetupImagePathCgroup:73 : Allow path /dev/mapper/vm1-data, perms: rw
2020-07-20 08:19:33.293+0000: 4632: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d1\x2dguest01.scope/devices.allow' to 'b 254:11 rw'
2020-07-20 08:19:33.293+0000: 4632: debug : virFileClose:134 : Closed fd 28
2020-07-20 08:19:33.293+0000: 4632: debug : virDevMapperDummyLogger:57 : dm log: level=7 file=ioctl/libdm-iface.c:1863 errno=4 : dm version [ opencount flush ] [16384] (*1)
2020-07-20 08:19:33.293+0000: 4632: debug : virDevMapperDummyLogger:57 : dm log: level=7 file=ioctl/libdm-iface.c:1863 errno=4 : dm deps vm1-data [ noopencount flush ] [16384] (*1)
2020-07-20 08:19:33.293+0000: 4632: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d1\x2dguest01.scope/devices.allow' to 'b 8:48 rw'
2020-07-20 08:19:33.293+0000: 4632: debug : virFileClose:134 : Closed fd 30
2020-07-20 08:19:33.293+0000: 4632: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d1\x2dguest01.scope/devices.allow' to 'c 136:* rw'
2020-07-20 08:19:33.293+0000: 4632: debug : virFileClose:134 : Closed fd 30
......
2020-07-20 08:19:33.293+0000: 4632: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d1\x2dguest01.scope/devices.allow' to 'c 10:232 rw'
2020-07-20 08:19:33.293+0000: 4632: debug : virFileClose:134 : Closed fd 30
2020-07-20 08:19:33.293+0000: 4632: debug : qemuSetupRNGCgroup:659 : Setting Cgroup ACL for RNG device
......
......
2020-07-20 08:19:58.825+0000: 4635: debug : qemuSetupImagePathCgroup:73 : Allow path /opt/vms/guest02/disk0.qcow2, perms: rw
2020-07-20 08:19:58.825+0000: 4635: debug : qemuSetupImagePathCgroup:73 : Allow path /dev/mapper/control, perms: rw
2020-07-20 08:19:58.825+0000: 4635: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d2\x2dguest02.scope/devices.allow' to 'c 10:236 rw'
2020-07-20 08:19:58.825+0000: 4635: debug : virFileClose:134 : Closed fd 32
2020-07-20 08:19:58.825+0000: 4635: debug : qemuSetupImagePathCgroup:73 : Allow path /dev/mapper/vm2-data, perms: rw
2020-07-20 08:19:58.825+0000: 4635: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d2\x2dguest02.scope/devices.allow' to 'b 254:8 rw'
2020-07-20 08:19:58.825+0000: 4635: debug : virFileClose:134 : Closed fd 32
2020-07-20 08:19:58.825+0000: 4635: debug : virDevMapperDummyLogger:57 : dm log: level=7 file=ioctl/libdm-iface.c:1863 errno=4 : dm deps vm2-data [ noopencount flush ] [16384] (*1)
2020-07-20 08:19:58.825+0000: 4635: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d2\x2dguest02.scope/devices.allow' to 'b 8:64 rw'
2020-07-20 08:19:58.825+0000: 4635: debug : virFileClose:134 : Closed fd 32
2020-07-20 08:19:58.825+0000: 4635: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d2\x2dguest02.scope/devices.allow' to 'c 136:* rw'
2020-07-20 08:19:58.825+0000: 4635: debug : virFileClose:134 : Closed fd 32
......
2020-07-20 08:19:58.825+0000: 4635: debug : virCgroupSetValueRaw:454 : Set value '/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d2\x2dguest02.scope/devices.allow' to 'c 10:232 rw'
2020-07-20 08:19:58.825+0000: 4635: debug : virFileClose:134 : Closed fd 32
2020-07-20 08:19:58.825+0000: 4635: debug : qemuSetupRNGCgroup:659 : Setting Cgroup ACL for RNG device
......


* /var/log/libvirt/qemu/guest01.log:
qemu-system-x86_64 pid: 4752
qemu-pr-helper pid: 4762

2020-07-20 08:19:33.251+0000: 4752: debug : qemuDomainSetupAllDisks:10180 : Setting up disks
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/1-guest01.dev/mapper/vm1-data
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/1-guest01.dev/mapper mode=0777
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/1-guest01.dev mode=0777
2020-07-20 08:19:33.251+0000: 4752: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/mapper/vm1-data
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/1-guest01.dev/mapper/../dm-11
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/1-guest01.dev/mapper/.. mode=0777
2020-07-20 08:19:33.251+0000: 4752: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/mapper/../dm-11
2020-07-20 08:19:33.251+0000: 4752: debug : virDevMapperDummyLogger:57 : dm log: level=7 file=ioctl/libdm-iface.c:1863 errno=4 : dm version [ opencount flush ] [16384] (*1)
2020-07-20 08:19:33.251+0000: 4752: debug : virDevMapperDummyLogger:57 : dm log: level=7 file=ioctl/libdm-iface.c:1863 errno=4 : dm deps vm1-data [ noopencount flush ] [16384] (*1)
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/1-guest01.dev/block/8:48
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/1-guest01.dev/block mode=0777
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/1-guest01.dev mode=0777
2020-07-20 08:19:33.251+0000: 4752: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/block/8:48
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/1-guest01.dev/block/../sdd
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/1-guest01.dev/block/.. mode=0777
2020-07-20 08:19:33.251+0000: 4752: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/block/../sdd
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/1-guest01.dev/mapper/control
2020-07-20 08:19:33.251+0000: 4752: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/1-guest01.dev/mapper mode=0777
2020-07-20 08:19:33.251+0000: 4752: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/mapper/control
2020-07-20 08:19:33.251+0000: 4752: debug : qemuDomainSetupAllDisks:10189 : Setup all disks


* /var/log/libvirt/qemu/guest02.log:
qemu-system-x86_64 pid: 4847
qemu-pr-helper pid: 4873

2020-07-20 08:19:58.768+0000: 4847: debug : qemuDomainSetupAllDisks:10180 : Setting up disks
2020-07-20 08:19:58.768+0000: 4847: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/2-guest02.dev/mapper/vm2-data
2020-07-20 08:19:58.768+0000: 4847: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/2-guest02.dev/mapper mode=0777
2020-07-20 08:19:58.768+0000: 4847: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/2-guest02.dev mode=0777
2020-07-20 08:19:58.768+0000: 4847: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/mapper/vm2-data
2020-07-20 08:19:58.768+0000: 4847: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/2-guest02.dev/mapper/../dm-8
2020-07-20 08:19:58.768+0000: 4847: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/2-guest02.dev/mapper/.. mode=0777
2020-07-20 08:19:58.768+0000: 4847: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/mapper/../dm-8
2020-07-20 08:19:58.768+0000: 4847: debug : virDevMapperDummyLogger:57 : dm log: level=7 file=ioctl/libdm-iface.c:1863 errno=4 : dm deps vm2-data [ noopencount flush ] [16384] (*1)
2020-07-20 08:19:58.768+0000: 4847: debug : virDevMapperDummyLogger:57 : dm log: level=3 file=ioctl/libdm-iface.c:1913 errno=-1 : device-mapper: deps ioctl on vm2-data failed: Bad file descriptor
2020-07-20 08:19:58.768+0000: 4847: debug : virFileMakeParentPath:3100 : path=/var/run/libvirt/qemu/2-guest02.dev/mapper/control
2020-07-20 08:19:58.768+0000: 4847: debug : virFileMakePathHelper:3037 : path=/var/run/libvirt/qemu/2-guest02.dev/mapper mode=0777
2020-07-20 08:19:58.768+0000: 4847: debug : qemuDomainCreateDeviceRecursive:9893 : Creating dev /dev/mapper/control
2020-07-20 08:19:58.768+0000: 4847: debug : qemuDomainSetupAllDisks:10189 : Setup all disks

Thanks,
Lin




