Effect of nice value on idle vcpu threads' CPU consumption

Hello,
	I have been experimenting with renicing vcpu threads and found an
oddity. I was expecting an idle vcpu thread to consume close to 0% CPU
irrespective of its nice value. That holds when the vcpu threads are at
nice 0; however, lowering the nice value of an (idle) vcpu thread causes
its CPU consumption to shoot up. Does anyone have a quick answer for
this behavior?

More details:

	Machine : x3650-M2 w/ 2 quad-core CPUs (Intel Xeon X5570), HT enabled
	Host    : RHEL 6 distro w/ 2.6.38-rc5 kernel
	Guest   : single guest w/ 4 vcpus (all pinned to physical cpu 0), 1GB mem,
		  SLES 11 distro w/ 2.6.37 kernel

The single guest is booted and kept idle.
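
For reference, a minimal sketch of how each vcpu thread can be pinned to
physical cpu 0 from userspace (equivalent to "taskset -pc 0 <tid>"); the
thread id below is illustrative, taken from the top output further down:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	pid_t tid = 5640;	/* illustrative vcpu thread id */
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);	/* physical cpu 0 */

	/* Pin the thread; same effect as "taskset -pc 0 5640". */
	if (sched_setaffinity(tid, sizeof(mask), &mask) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}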


When all vcpu threads are at nice 0, here's the consumption (close to 0%
for each vcpu thread):


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+   P COMMAND                                                                   
 5642 qemu      20   0 1567m 381m 3048 S  1.2  1.2   0:02.56  0 qemu-kvm
 5640 qemu      20   0 1567m 381m 3048 S  0.8  1.2   0:12.74  0 qemu-kvm
 5641 qemu      20   0 1567m 381m 3048 S  0.8  1.2   0:02.60  0 qemu-kvm
 5643 qemu      20   0 1567m 381m 3048 S  0.6  1.2   0:02.76  0 qemu-kvm

Changing the nice value of one of the vcpu threads (5640) to -20:


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+   P COMMAND                                                                   
 5640 qemu       0 -20 1567m 381m 3048 R 45.5  1.2   0:19.67  0 qemu-kvm
 5641 qemu      20   0 1567m 381m 3048 R  0.4  1.2   0:03.33  0 qemu-kvm
 5642 qemu      20   0 1567m 381m 3048 R  0.4  1.2   0:03.16  0 qemu-kvm
 5643 qemu      20   0 1567m 381m 3048 R  0.4  1.2   0:03.36  0 qemu-kvm
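
On Linux the nice value is a per-thread attribute, so the renice applies
to just this vcpu thread. A minimal sketch of the equivalent
setpriority() call, again with the illustrative tid from above (note a
negative nice value needs CAP_SYS_NICE):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	/*
	 * setpriority() with PRIO_PROCESS and a thread id renices
	 * just that thread on Linux -- same effect as
	 * "renice -n -20 -p 5640".
	 */
	if (setpriority(PRIO_PROCESS, 5640, -20) < 0) {
		perror("setpriority");
		return 1;
	}
	return 0;
}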
	
Changing the nice value of a second vcpu thread (5641) to -20:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+   P COMMAND                                                                   
 5640 qemu       0 -20 1567m 381m 3048 S 35.7  1.2   0:30.92  0 qemu-kvm
 5641 qemu       0 -20 1567m 381m 3048 S 26.1  1.2   0:04.77  0 qemu-kvm
 5642 qemu      20   0 1567m 381m 3048 S  0.2  1.2   0:03.29  0 qemu-kvm
 5643 qemu      20   0 1567m 381m 3048 S  0.2  1.2   0:03.50  0 qemu-kvm

Is this behavior expected? Is there an explanation for it?

- vatsa
 
