Hi,

I have 18 QEMU/KVM VMs configured on a local libvirtd instance. Normally
virt-manager connects to the session snappily, but when I set
'access_drivers = [ "polkit" ]' in libvirtd.conf, virt-manager takes 7
seconds to connect and pins one of my cores at 100% usage; polkitd and
libvirtd are the processes consuming the CPU. This sucks, and makes me
reconsider recommending virt-manager to others.

Further investigation reveals that virt-manager is triggering **596**
polkit checks. In particular:

- 240 from org.libvirt.api.node-device.getattr
- 120 from org.libvirt.api.node-device.read
- 128 from org.libvirt.api.domain.read - obviously 18 of these are
  probably necessary
- 37 from org.libvirt.api.domain.read-secure
- 18 from org.libvirt.api.domain.getattr - obviously all of these are
  probably necessary

It seems like virt-manager is going out of its way to prefetch every
possible piece of system state; cf. virtinst/connection.py and
virtManager/connection.py. Don't do that, I guess.

Note that `virsh list --all` does not have the same performance issue,
because it does only one domain.read and one domain.getattr per domain,
which is optimal.

I don't think I have an alternative to using polkit: I would like to
have a guest account that can only access a restricted list of VMs, and
AFAIK libvirt's other authorization facilities don't allow for that sort
of thing.

So... maybe we could bypass connection prefetching under certain
circumstances? Or is pkcheck not supposed to be this slow?

FWIW: Machine is a Dell Precision T3500 (Xeon W3530) running Arch.

--
Richard Tollerton <rich.tollerton@xxxxxx>

_______________________________________________
virt-tools-list mailing list
virt-tools-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/virt-tools-list
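P.S. For context, the per-VM restriction I'm after looks roughly like
the sketch below, based on libvirt's ACL documentation for the polkit
access driver (libvirt attaches details such as "connect_driver" and
"domain_name" to each check). The user name "guest" and domain name
"demo" here are placeholders, not anything from my setup; untested:

```javascript
/* /etc/polkit-1/rules.d/50-libvirt-guest.rules -- sketch, untested.
 * Allows the placeholder user "guest" to act only on the VM named
 * "demo", and denies it every other org.libvirt.api.domain.* action,
 * by matching the details libvirt supplies with its ACL checks. */
polkit.addRule(function(action, subject) {
    if (action.id.indexOf("org.libvirt.api.domain.") === 0 &&
        subject.user == "guest") {
        if (action.lookup("connect_driver") == "QEMU" &&
            action.lookup("domain_name") == "demo") {
            return polkit.Result.YES;
        }
        return polkit.Result.NO;
    }
    /* fall through to other rules for everyone else */
});
```

Which is exactly why the 596-check storm hurts: a rule like this runs in
polkitd's JavaScript interpreter once per check, per connection.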