Re: [BUG] duplicate suspend/resume lifecycle events with QEMU

Hello,

On 13.06.2017 at 06:13, Philipp Hahn wrote:
> I'm using the libvirt event mechanism and noticed that several events
> are reported twice:
...
>> $ virsh  event --loop --event lifecycle
>> event 'lifecycle' for domain installer: Started Booted
>> event 'lifecycle' for domain installer: Suspended Paused
>> event 'lifecycle' for domain installer: Resumed Unpaused
>> event 'lifecycle' for domain installer: Resumed Unpaused
>> event 'lifecycle' for domain installer: Suspended Paused
>> event 'lifecycle' for domain installer: Suspended Paused
>> event 'lifecycle' for domain installer: Stopped Destroyed
...
> Looking at src/qemu/qemu_monitor_json.c:qemuMonitorJSONIOProcessEvent:

That was not the cause. Using gdb shows where the duplicate events are
really created:

> Breakpoint 9, virDomainEventLifecycleNew (id=10, name=0x7fffd825e700 "win8", uuid=0x7fffd825e128 "o8r\204\344\372F\263\214oGF\325@=9", type=4, detail=0) at ../../../src/conf/domain_event.c:657
> 657     }
> $13 = {parent = {parent = {parent = {u = {dummy_align1 = 7700611138, dummy_align2 = 0x1cafe0042, s = {magic = 3405643842, refs = 1}}, klass = 0x5555558304e0}, eventID = 0, meta = {id = 10, name = 0x5555558304a0 "win8", 
>         uuid = "o8r\204\344\372F\263\214oGF\325@=9", key = 0x555555831270 "6f387284-e4fa-46b3-8c6f-4746d5403d39"}, remoteID = -1, dispatch = 0x7ffff77d85c0 <virDomainEventDispatchDefaultFunc>}, dummy = false}, type = 4, detail = 0}
> (gdb) bt
> #0  virDomainEventLifecycleNew (id=10, name=0x7fffd825e700 "win8", uuid=0x7fffd825e128 "o8r\204\344\372F\263\214oGF\325@=9", type=4, detail=0) at ../../../src/conf/domain_event.c:657
> #1  0x00007fffe52fefef in qemuProcessHandleResume (mon=0x0, vm=0x7fffd825ec80, opaque=0x7fffd8101b90) at ../../../src/qemu/qemu_process.c:773
> #2  0x00007fffe531c9c2 in qemuMonitorEmitResume (mon=0x7fffd0000ea0) at ../../../src/qemu/qemu_monitor.c:1330
> #3  0x00007fffe532fd0b in qemuMonitorJSONIOProcessEvent (obj=<optimized out>, mon=<optimized out>) at ../../../src/qemu/qemu_monitor_json.c:178
> #4  qemuMonitorJSONIOProcessLine (mon=0x0, line=0x7fffe55d50e0 <eventHandlers+224> "}\242\071\345\377\177", msg=0xe) at ../../../src/qemu/qemu_monitor_json.c:207
> #5  0x00007fffe532fedb in qemuMonitorJSONIOProcess (mon=0x0, data=0x7ffff774f9b0 <virLogOnce> "H\203\354\bH\215=\245\315G", len=119, msg=0x0) at ../../../src/qemu/qemu_monitor_json.c:249
> #6  0x00007fffe531b293 in qemuMonitorIOProcess (mon=<optimized out>) at ../../../src/qemu/qemu_monitor.c:432
> #7  qemuMonitorIO (watch=18, fd=-134433088, events=1, opaque=0x7fffd0000ea0) at ../../../src/qemu/qemu_monitor.c:686
> #8  0x00007ffff7735446 in virEventPollDispatchHandles (fds=<optimized out>, nfds=<optimized out>) at ../../../src/util/vireventpoll.c:508
> #9  virEventPollRunOnce () at ../../../src/util/vireventpoll.c:657
> #10 0x00007ffff7733b71 in virEventRunDefaultImpl () at ../../../src/util/virevent.c:314
> #11 0x00007ffff788aced in virNetDaemonRun (dmn=0x555555807d60) at ../../../src/rpc/virnetdaemon.c:818
> #12 0x000055555556e486 in main (argc=-10000, argv=0x0) at ../../../daemon/libvirtd.c:1547
> (gdb) c

This is the QEMU monitor event handler: QEMU sends a "RESUME" event over
QMP, qemuMonitorJSONIOProcessEvent() looks it up in the eventHandlers
table (visible as eventHandlers+224 in frame #4 above), and the handler
forwards it to qemuProcessHandleResume(), which queues the first
lifecycle event.
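The dispatch works roughly as follows (paraphrased from
src/qemu/qemu_monitor_json.c of that era; not a verbatim copy, details
differ between versions):

    /* QMP event name -> handler table (excerpt, paraphrased) */
    static qemuEventHandler eventHandlers[] = {
        /* ... */
        { "RESUME", qemuMonitorJSONHandleResume, },
        /* ... */
    };

    static void
    qemuMonitorJSONHandleResume(qemuMonitorPtr mon,
                                virJSONValuePtr data ATTRIBUTE_UNUSED)
    {
        /* fires the domainResume callback, which is
         * qemuProcessHandleResume() in the QEMU driver */
        qemuMonitorEmitResume(mon);
    }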

> Continuing.
> [Switching to Thread 0x7fffeee05700 (LWP 11193)]
> 
> Breakpoint 9, virDomainEventLifecycleNew (id=10, name=0x7fffd825e700 "win8", uuid=0x7fffd825e128 "o8r\204\344\372F\263\214oGF\325@=9", type=4, detail=0) at ../../../src/conf/domain_event.c:657
> 657     }
> $14 = {parent = {parent = {parent = {u = {dummy_align1 = 7700611138, dummy_align2 = 0x1cafe0042, s = {magic = 3405643842, refs = 1}}, klass = 0x5555558304e0}, eventID = 0, meta = {id = 10, name = 0x7fffbc000a30 "win8", 
>         uuid = "o8r\204\344\372F\263\214oGF\325@=9", key = 0x7fffbc000b70 "6f387284-e4fa-46b3-8c6f-4746d5403d39"}, remoteID = -1, dispatch = 0x7ffff77d85c0 <virDomainEventDispatchDefaultFunc>}, dummy = false}, type = 4, detail = 0}
> (gdb) bt
> #0  virDomainEventLifecycleNew (id=10, name=0x7fffd825e700 "win8", uuid=0x7fffd825e128 "o8r\204\344\372F\263\214oGF\325@=9", type=4, detail=0) at ../../../src/conf/domain_event.c:657
> #1  0x00007fffe53579dc in qemuDomainResume (dom=0x7fffbc000930) at ../../../src/qemu/qemu_driver.c:1965
> #2  0x00007ffff7818acf in virDomainResume (domain=0x7fffbc000930) at ../../../src/libvirt-domain.c:679
> #3  0x0000555555595a3e in remoteDispatchDomainResume (server=<optimized out>, msg=<optimized out>, args=<optimized out>, rerr=<optimized out>, client=<optimized out>) at ../../../daemon/remote_dispatch.h:9132
> #4  remoteDispatchDomainResumeHelper (server=0x555555806d30, client=0x555555832970, msg=0x555555837910, rerr=0x7fffeee04cc0, args=0x7fffbc000930, ret=0x7fffbc000fc0) at ../../../daemon/remote_dispatch.h:9108
> #5  0x00007ffff7890939 in virNetServerProgramDispatchCall (msg=<optimized out>, client=<optimized out>, server=<optimized out>, prog=<optimized out>) at ../../../src/rpc/virnetserverprogram.c:437
> #6  virNetServerProgramDispatch (prog=0x55555581be70, server=0x7ffff774f9b0 <virLogOnce>, server@entry=0x555555806d30, client=0x555555832970, msg=0x555555837910) at ../../../src/rpc/virnetserverprogram.c:307
> #7  0x00005555555a7378 in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x555555806d30) at ../../../src/rpc/virnetserver.c:148
> #8  virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x555555806d30) at ../../../src/rpc/virnetserver.c:169
> #9  0x00007ffff7782c98 in virThreadPoolWorker (opaque=opaque@entry=0x5555557faa70) at ../../../src/util/virthreadpool.c:167
> #10 0x00007ffff7782068 in virThreadHelper (data=<optimized out>) at ../../../src/util/virthread.c:206
> #11 0x00007ffff45b4064 in start_thread (arg=0x7fffeee05700) at pthread_create.c:309
> #12 0x00007ffff42e962d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

This is the code from src/qemu/qemu_driver.c:qemuDomainResume() (frame
#1 of the second backtrace), which queues the second event:

>  1903 static int qemuDomainResume(virDomainPtr dom)
>  1904 {
...
>  1942         if (qemuProcessStartCPUs(driver, vm, dom->conn,
>  1943                                  VIR_DOMAIN_RUNNING_UNPAUSED,
>  1944                                  QEMU_ASYNC_JOB_NONE) < 0) {
>  1945             if (virGetLastError() == NULL)
>  1946                 virReportError(VIR_ERR_OPERATION_FAILED,
>  1947                                "%s", _("resume operation failed"));
>  1948             goto endjob;
>  1949         }                      
>  1950         event = virDomainEventLifecycleNewFromObj(vm,
>  1951                                          VIR_DOMAIN_EVENT_RESUMED,
>  1952                                          VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);
>  1953     }
...
>  1966  }
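
For comparison, frame #1 of the first backtrace,
qemuProcessHandleResume() in src/qemu/qemu_process.c, queues the very
same event when the QMP RESUME event arrives (again paraphrased, not
verbatim):

    static int
    qemuProcessHandleResume(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
                            virDomainObjPtr vm,
                            void *opaque)
    {
        virQEMUDriverPtr driver = opaque;
        virObjectEventPtr event = NULL;

        virObjectLock(vm);
        if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) {
            virDomainObjSetState(vm, VIR_DOMAIN_RUNNING,
                                 VIR_DOMAIN_RUNNING_UNPAUSED);
            event = virDomainEventLifecycleNewFromObj(vm,
                                        VIR_DOMAIN_EVENT_RESUMED,
                                        VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);
            /* ... save the updated status to disk ... */
        }
        virObjectUnlock(vm);
        qemuDomainEventQueue(driver, event);
        return 0;
    }

So both code paths construct a RESUMED/UNPAUSED lifecycle event for one
and the same state change, which is exactly what the two backtraces
show.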

My interpretation is this:
This is legacy code from the time when libvirt used HMP to talk to
QEMU and therefore did NOT get events back then.
Nowadays QEMU emits a RESUME event itself over QMP, and libvirt still
duplicates it with its own synthetic event.
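
One possible direction (untested sketch; qemuMonitorSupportsAsyncEvents()
is a hypothetical helper, not an existing function) would be to keep the
driver-side event only as a fallback for monitors that do not report the
state change themselves:

    /* Untested sketch for qemuDomainResume(): queue the synthetic
     * RESUMED event only if QEMU will not announce the resume itself
     * via a QMP event.  qemuMonitorSupportsAsyncEvents() is
     * hypothetical, not an existing function. */
    if (!qemuMonitorSupportsAsyncEvents(priv->mon))
        event = virDomainEventLifecycleNewFromObj(vm,
                                        VIR_DOMAIN_EVENT_RESUMED,
                                        VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);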

Can we get rid of them, or at least disable those lifecycle events when
QMP is used (as sketched above), so that we report the real state from
QEMU and not some desired state assumed by libvirt?
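
Until then, a client has to filter the duplicates itself, e.g. by
remembering the last delivered lifecycle event per domain. An untested
sketch against the public API (build with "cc dedup.c -lvirt"); note
that it would also swallow a genuine immediate repetition of the same
event:

    #include <stdio.h>
    #include <string.h>
    #include <libvirt/libvirt.h>

    static int last_event = -1, last_detail = -1;
    static char last_uuid[VIR_UUID_STRING_BUFLEN];

    /* Lifecycle callback that drops an event identical to the previous
     * one for the same domain. */
    static int
    lifecycleCb(virConnectPtr conn, virDomainPtr dom,
                int event, int detail, void *opaque)
    {
        char uuid[VIR_UUID_STRING_BUFLEN];

        (void)conn; (void)opaque;
        if (virDomainGetUUIDString(dom, uuid) < 0)
            return 0;
        if (event == last_event && detail == last_detail &&
            strcmp(uuid, last_uuid) == 0)
            return 0;                       /* duplicate: ignore */
        last_event = event;
        last_detail = detail;
        memcpy(last_uuid, uuid, sizeof(uuid));
        printf("event 'lifecycle' for domain %s: %d %d\n",
               virDomainGetName(dom), event, detail);
        return 0;
    }

    int main(void)
    {
        virConnectPtr conn;

        virEventRegisterDefaultImpl();
        if (!(conn = virConnectOpen("qemu:///system")))
            return 1;
        virConnectDomainEventRegisterAny(conn, NULL,
                                         VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                         VIR_DOMAIN_EVENT_CALLBACK(lifecycleCb),
                                         NULL, NULL);
        while (virEventRunDefaultImpl() == 0)
            ;
        return 0;
    }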

Philipp

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


