On Tue, Oct 26, 2021 at 04:49:55PM +0200, Michal Prívozník wrote:
> On 10/26/21 4:39 PM, Daniel P. Berrangé wrote:
> > On Tue, Oct 26, 2021 at 12:17:06PM +0200, Michal Privoznik wrote:
> >> In some cases the worker func running inside the pool may rely on
> >> virIdentity. While the worker func could check for an identity and
> >> set one, that is not optimal - it may not have access to the identity
> >> of the thread creating the pool and thus would have to call
> >> virIdentityGetSystem(). Allow passing an identity when creating the
> >> pool.
> >
> > I wonder if we should have an identity passed via virThreadPoolSendJob,
> > so whatever queues the job can preserve its identity?
>
> I thought about this too, but the thread that's doing that doesn't have
> an identity set either:
>
> #0  virThreadPoolSendJob (pool=0x7f2ff010a370, priority=0, jobData=0x7f2fdc0b1f10) at ../src/util/virthreadpool.c:390
> #1  0x00007f30301f5240 in qemuProcessEventSubmit (driver=0x7f2ff0022370, event=0x7f3012ffc880) at ../src/qemu/qemu_process.c:300
> #2  0x00007f30301f8d13 in qemuProcessHandleSerialChanged (mon=0x7f2ff00295d0, vm=0x7f2ff0037830, devAlias=0x7f2fdc12f9e0 "channel0", connected=false, opaque=0x7f2ff0022370) at ../src/qemu/qemu_process.c:1540
> #3  0x00007f30301c37f2 in qemuMonitorEmitSerialChange (mon=0x7f2ff00295d0, devAlias=0x7f2fdc12f9e0 "channel0", connected=false) at ../src/qemu/qemu_monitor.c:1373
> #4  0x00007f30301d48c8 in qemuMonitorJSONHandleSerialChange (mon=0x7f2ff00295d0, data=0x7f2fdc0b2b40) at ../src/qemu/qemu_monitor_json.c:1145
> #5  0x00007f30301d1725 in qemuMonitorJSONIOProcessEvent (mon=0x7f2ff00295d0, obj=0x7f2fdc0b2770) at ../src/qemu/qemu_monitor_json.c:208
> #6  0x00007f30301d1933 in qemuMonitorJSONIOProcessLine (mon=0x7f2ff00295d0, line=0x7f2fdc0b4cd0 "{\"timestamp\": {\"seconds\": 1635259527, \"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": {\"open\": false, \"id\": \"channel0\"}}", msg=0x0) at ../src/qemu/qemu_monitor_json.c:236
> #7  0x00007f30301d1bcc in qemuMonitorJSONIOProcess (mon=0x7f2ff00295d0, data=0x7f2fdc12f590 "{\"timestamp\": {\"seconds\": 1635259527, \"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": {\"open\": false, \"id\": \"channel0\"}}\r\n", len=135, msg=0x0) at ../src/qemu/qemu_monitor_json.c:274
> #8  0x00007f30301c0400 in qemuMonitorIOProcess (mon=0x7f2ff00295d0) at ../src/qemu/qemu_monitor.c:347
> #9  0x00007f30301c0c80 in qemuMonitorIO (socket=0x7f2ff0017280, cond=0, opaque=0x7f2ff00295d0) at ../src/qemu/qemu_monitor.c:573
> #10 0x00007f3036ba8c77 in socket_source_dispatch () at /usr/lib64/libgio-2.0.so.0
> #11 0x00007f3036d8b9d0 in g_main_context_dispatch () at /usr/lib64/libglib-2.0.so.0
> #12 0x00007f3036d8bd78 in g_main_context_iterate.constprop () at /usr/lib64/libglib-2.0.so.0
> #13 0x00007f3036d8c06b in g_main_loop_run () at /usr/lib64/libglib-2.0.so.0
> #14 0x00007f3036f81d7b in virEventThreadWorker (opaque=0x7f2ff00b10c0) at ../src/util/vireventthread.c:124
> #15 0x00007f3036db52bd in g_thread_proxy () at /usr/lib64/libglib-2.0.so.0
> #16 0x00007f3036235e3e in start_thread () at /lib64/libpthread.so.0
> #17 0x00007f30369ffd6f in clone () at /lib64/libc.so.6
>
> virThreadPoolSendJob 19 $ call virIdentityGetCurrent()
> $1 = 0x0

Hmm, yes, because that's from the event loop thread.

This actually makes me wonder whether we need the worker pool at all
here. The worker pool is global to the driver, so events for all VMs
are processed in the same pool and thus contend for resources.

This made sense when all monitor I/O was processed by the main event
loop thread, but now we have a per-VM event loop thread. It seems like
we could just process events directly in the per-VM thread, unless
something they do synchronously needs to do more monitor I/O.

Nonetheless, that's too big a change to do in a hurry, so let's go
with your patches here.
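To make the idea concrete, here is a minimal sketch of the shape the
proposal takes. The trailing identity argument to virThreadPoolNewFull()
and the buildPool()/workerFunc() helpers are illustrative assumptions,
not the committed code: the point is only that the pool captures one
identity up front and each worker adopts it via virIdentitySetCurrent()
before it starts consuming jobs.

    /* Illustrative sketch only - the exact virThreadPoolNewFull()
     * signature below is an assumption about the proposed API. */

    #include "viridentity.h"
    #include "virthreadpool.h"

    static void
    workerFunc(void *jobdata G_GNUC_UNUSED,
               void *opaque G_GNUC_UNUSED)
    {
        /* Because each worker adopted the pool's identity at startup
         * (via virIdentitySetCurrent()), anything this job calls can
         * rely on virIdentityGetCurrent() returning non-NULL. */
        g_autoptr(virIdentity) ident = virIdentityGetCurrent();
    }

    static virThreadPool *
    buildPool(void *opaque)
    {
        g_autoptr(virIdentity) ident = virIdentityGetCurrent();

        /* The creating thread may itself lack an identity - e.g. the
         * event loop thread in the backtrace above - so fall back to
         * the system identity. */
        if (!ident)
            ident = virIdentityGetSystem();

        /* Hypothetical extended signature: the identity argument is
         * what the patches under discussion introduce. */
        return virThreadPoolNewFull(1, 1, 0, workerFunc,
                                    "event-worker", ident, opaque);
    }

The per-job alternative floated above - passing an identity through
virThreadPoolSendJob() - would instead snapshot the caller's identity
into the job data at queue time, but as the backtrace shows, the
queueing event loop thread has no identity to snapshot.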
Regards,
Daniel

-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|